00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2462 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3723 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.080 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.081 The recommended git tool is: git 00:00:00.081 using credential 00000000-0000-0000-0000-000000000002 00:00:00.084 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.107 Fetching changes from the remote Git repository 00:00:00.109 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.151 Using shallow fetch with depth 1 00:00:00.151 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.151 > git --version # timeout=10 00:00:00.198 > git --version # 'git version 2.39.2' 00:00:00.198 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.242 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.242 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.467 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.480 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.494 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.494 > git config core.sparsecheckout # timeout=10 00:00:07.506 > git read-tree -mu HEAD # timeout=10 00:00:07.522 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.544 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.544 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.637 [Pipeline] Start of Pipeline 00:00:07.650 [Pipeline] library 00:00:07.652 Loading library shm_lib@master 00:00:07.652 Library shm_lib@master is cached. Copying from home. 00:00:07.667 [Pipeline] node 00:00:07.691 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:07.693 [Pipeline] { 00:00:07.701 [Pipeline] catchError 00:00:07.702 [Pipeline] { 00:00:07.711 [Pipeline] wrap 00:00:07.717 [Pipeline] { 00:00:07.723 [Pipeline] stage 00:00:07.724 [Pipeline] { (Prologue) 00:00:07.741 [Pipeline] echo 00:00:07.743 Node: VM-host-SM0 00:00:07.751 [Pipeline] cleanWs 00:00:07.762 [WS-CLEANUP] Deleting project workspace... 00:00:07.762 [WS-CLEANUP] Deferred wipeout is used... 
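For orientation, the checkout above reduces to a shallow, single-ref fetch followed by a forced detached checkout. A minimal hand-run equivalent, using only the URL, ref and commit hash reported in the log (the fresh git init, and the omission of the proxy and credential plumbing, are simplifications for illustration):

    # re-create the shallow jbp checkout logged above
    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507   # revision reported at FETCH_HEAD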
00:00:07.768 [WS-CLEANUP] done 00:00:07.982 [Pipeline] setCustomBuildProperty 00:00:08.061 [Pipeline] httpRequest 00:00:08.407 [Pipeline] echo 00:00:08.409 Sorcerer 10.211.164.20 is alive 00:00:08.422 [Pipeline] retry 00:00:08.431 [Pipeline] { 00:00:08.485 [Pipeline] httpRequest 00:00:08.488 HttpMethod: GET 00:00:08.489 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.489 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.491 Response Code: HTTP/1.1 200 OK 00:00:08.491 Success: Status code 200 is in the accepted range: 200,404 00:00:08.492 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.650 [Pipeline] } 00:00:09.665 [Pipeline] // retry 00:00:09.673 [Pipeline] sh 00:00:09.958 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.973 [Pipeline] httpRequest 00:00:10.383 [Pipeline] echo 00:00:10.385 Sorcerer 10.211.164.20 is alive 00:00:10.395 [Pipeline] retry 00:00:10.397 [Pipeline] { 00:00:10.412 [Pipeline] httpRequest 00:00:10.416 HttpMethod: GET 00:00:10.417 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:10.418 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:10.426 Response Code: HTTP/1.1 200 OK 00:00:10.427 Success: Status code 200 is in the accepted range: 200,404 00:00:10.428 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:17.043 [Pipeline] } 00:01:17.060 [Pipeline] // retry 00:01:17.067 [Pipeline] sh 00:01:17.347 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:19.897 [Pipeline] sh 00:01:20.177 + git -C spdk log --oneline -n5 00:01:20.177 c13c99a5e test: Various fixes for Fedora40 00:01:20.177 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:20.177 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:20.177 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:20.177 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:20.194 [Pipeline] writeFile 00:01:20.209 [Pipeline] sh 00:01:20.490 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:20.501 [Pipeline] sh 00:01:20.780 + cat autorun-spdk.conf 00:01:20.780 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.780 SPDK_TEST_NVMF=1 00:01:20.780 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.780 SPDK_TEST_VFIOUSER=1 00:01:20.780 SPDK_TEST_USDT=1 00:01:20.780 SPDK_RUN_UBSAN=1 00:01:20.780 SPDK_TEST_NVMF_MDNS=1 00:01:20.780 NET_TYPE=virt 00:01:20.780 SPDK_JSONRPC_GO_CLIENT=1 00:01:20.780 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.787 RUN_NIGHTLY=1 00:01:20.789 [Pipeline] } 00:01:20.802 [Pipeline] // stage 00:01:20.817 [Pipeline] stage 00:01:20.819 [Pipeline] { (Run VM) 00:01:20.831 [Pipeline] sh 00:01:21.110 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:21.110 + echo 'Start stage prepare_nvme.sh' 00:01:21.110 Start stage prepare_nvme.sh 00:01:21.110 + [[ -n 5 ]] 00:01:21.110 + disk_prefix=ex5 00:01:21.110 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:21.110 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:21.110 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:21.110 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.110 ++ SPDK_TEST_NVMF=1 00:01:21.110 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:21.110 ++ SPDK_TEST_VFIOUSER=1 00:01:21.110 ++ SPDK_TEST_USDT=1 00:01:21.110 ++ SPDK_RUN_UBSAN=1 00:01:21.110 ++ SPDK_TEST_NVMF_MDNS=1 00:01:21.110 ++ NET_TYPE=virt 00:01:21.110 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:21.110 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:21.110 ++ RUN_NIGHTLY=1 00:01:21.110 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:21.110 + nvme_files=() 00:01:21.110 + declare -A nvme_files 00:01:21.110 + backend_dir=/var/lib/libvirt/images/backends 00:01:21.110 + nvme_files['nvme.img']=5G 00:01:21.110 + nvme_files['nvme-cmb.img']=5G 00:01:21.110 + nvme_files['nvme-multi0.img']=4G 00:01:21.110 + nvme_files['nvme-multi1.img']=4G 00:01:21.110 + nvme_files['nvme-multi2.img']=4G 00:01:21.110 + nvme_files['nvme-openstack.img']=8G 00:01:21.110 + nvme_files['nvme-zns.img']=5G 00:01:21.110 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:21.110 + (( SPDK_TEST_FTL == 1 )) 00:01:21.110 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:21.110 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:21.110 + for nvme in "${!nvme_files[@]}" 00:01:21.110 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:21.110 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:21.110 + for nvme in "${!nvme_files[@]}" 00:01:21.110 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:21.110 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:21.110 + for nvme in "${!nvme_files[@]}" 00:01:21.111 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:21.111 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:21.111 + for nvme in "${!nvme_files[@]}" 00:01:21.111 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:21.111 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:21.111 + for nvme in "${!nvme_files[@]}" 00:01:21.111 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:21.111 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:21.111 + for nvme in "${!nvme_files[@]}" 00:01:21.111 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:21.111 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:21.111 + for nvme in "${!nvme_files[@]}" 00:01:21.111 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:21.368 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:21.368 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:21.368 + echo 'End stage prepare_nvme.sh' 00:01:21.368 End stage prepare_nvme.sh 00:01:21.378 [Pipeline] sh 00:01:21.655 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:21.655 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b 
/var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:01:21.655 00:01:21.655 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:21.655 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:21.655 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:21.655 HELP=0 00:01:21.655 DRY_RUN=0 00:01:21.655 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:01:21.655 NVME_DISKS_TYPE=nvme,nvme, 00:01:21.655 NVME_AUTO_CREATE=0 00:01:21.655 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:01:21.655 NVME_CMB=,, 00:01:21.655 NVME_PMR=,, 00:01:21.655 NVME_ZNS=,, 00:01:21.655 NVME_MS=,, 00:01:21.655 NVME_FDP=,, 00:01:21.655 SPDK_VAGRANT_DISTRO=fedora39 00:01:21.655 SPDK_VAGRANT_VMCPU=10 00:01:21.655 SPDK_VAGRANT_VMRAM=12288 00:01:21.655 SPDK_VAGRANT_PROVIDER=libvirt 00:01:21.655 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:21.655 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:21.655 SPDK_OPENSTACK_NETWORK=0 00:01:21.655 VAGRANT_PACKAGE_BOX=0 00:01:21.655 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:21.655 FORCE_DISTRO=true 00:01:21.655 VAGRANT_BOX_VERSION= 00:01:21.655 EXTRA_VAGRANTFILES= 00:01:21.655 NIC_MODEL=e1000 00:01:21.655 00:01:21.655 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:01:21.655 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:24.185 Bringing machine 'default' up with 'libvirt' provider... 00:01:24.752 ==> default: Creating image (snapshot of base box volume). 00:01:25.011 ==> default: Creating domain with the following settings... 
00:01:25.011 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734157898_6990fa5d8759e8b2a56f 00:01:25.011 ==> default: -- Domain type: kvm 00:01:25.011 ==> default: -- Cpus: 10 00:01:25.011 ==> default: -- Feature: acpi 00:01:25.011 ==> default: -- Feature: apic 00:01:25.011 ==> default: -- Feature: pae 00:01:25.011 ==> default: -- Memory: 12288M 00:01:25.011 ==> default: -- Memory Backing: hugepages: 00:01:25.011 ==> default: -- Management MAC: 00:01:25.011 ==> default: -- Loader: 00:01:25.011 ==> default: -- Nvram: 00:01:25.011 ==> default: -- Base box: spdk/fedora39 00:01:25.011 ==> default: -- Storage pool: default 00:01:25.011 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734157898_6990fa5d8759e8b2a56f.img (20G) 00:01:25.011 ==> default: -- Volume Cache: default 00:01:25.011 ==> default: -- Kernel: 00:01:25.011 ==> default: -- Initrd: 00:01:25.011 ==> default: -- Graphics Type: vnc 00:01:25.011 ==> default: -- Graphics Port: -1 00:01:25.011 ==> default: -- Graphics IP: 127.0.0.1 00:01:25.011 ==> default: -- Graphics Password: Not defined 00:01:25.011 ==> default: -- Video Type: cirrus 00:01:25.011 ==> default: -- Video VRAM: 9216 00:01:25.011 ==> default: -- Sound Type: 00:01:25.011 ==> default: -- Keymap: en-us 00:01:25.011 ==> default: -- TPM Path: 00:01:25.011 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:25.011 ==> default: -- Command line args: 00:01:25.011 ==> default: -> value=-device, 00:01:25.011 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:25.011 ==> default: -> value=-drive, 00:01:25.011 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:01:25.012 ==> default: -> value=-device, 00:01:25.012 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:25.012 ==> default: -> value=-device, 00:01:25.012 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:25.012 ==> default: -> value=-drive, 00:01:25.012 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:25.012 ==> default: -> value=-device, 00:01:25.012 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:25.012 ==> default: -> value=-drive, 00:01:25.012 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:25.012 ==> default: -> value=-device, 00:01:25.012 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:25.012 ==> default: -> value=-drive, 00:01:25.012 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:25.012 ==> default: -> value=-device, 00:01:25.012 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:25.272 ==> default: Creating shared folders metadata... 00:01:25.272 ==> default: Starting domain. 00:01:27.178 ==> default: Waiting for domain to get an IP address... 00:01:45.263 ==> default: Waiting for SSH to become available... 00:01:46.638 ==> default: Configuring and enabling network interfaces... 
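Each backing image created earlier maps onto the command-line arguments above as an if=none drive attached to an nvme controller through an nvme-ns device: ex5-nvme.img becomes the single namespace of controller nvme-0, and the three ex5-nvme-multi*.img files become namespaces 1-3 of nvme-1. A stripped-down sketch of just the first controller, assuming qemu-system-x86_64 were invoked directly rather than through the libvirt domain that actually carries the machine, memory, boot-disk and network options listed above:

    qemu-system-x86_64 \
        -device nvme,id=nvme-0,serial=12340 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096
    # nvme-1 (serial=12341) repeats the pattern with one nvme-ns per ex5-nvme-multi{0,1,2}.img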
00:01:50.851 default: SSH address: 192.168.121.100:22 00:01:50.851 default: SSH username: vagrant 00:01:50.851 default: SSH auth method: private key 00:01:53.383 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:01.502 ==> default: Mounting SSHFS shared folder... 00:02:02.563 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:02.563 ==> default: Checking Mount.. 00:02:03.939 ==> default: Folder Successfully Mounted! 00:02:03.939 ==> default: Running provisioner: file... 00:02:04.874 default: ~/.gitconfig => .gitconfig 00:02:05.133 00:02:05.133 SUCCESS! 00:02:05.133 00:02:05.133 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:05.133 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:05.133 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:05.133 00:02:05.142 [Pipeline] } 00:02:05.156 [Pipeline] // stage 00:02:05.164 [Pipeline] dir 00:02:05.165 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:02:05.166 [Pipeline] { 00:02:05.178 [Pipeline] catchError 00:02:05.179 [Pipeline] { 00:02:05.191 [Pipeline] sh 00:02:05.471 + vagrant ssh-config --host vagrant 00:02:05.471 + sed -ne /^Host/,$p 00:02:05.471 + tee ssh_conf 00:02:08.758 Host vagrant 00:02:08.758 HostName 192.168.121.100 00:02:08.758 User vagrant 00:02:08.758 Port 22 00:02:08.758 UserKnownHostsFile /dev/null 00:02:08.758 StrictHostKeyChecking no 00:02:08.758 PasswordAuthentication no 00:02:08.758 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:08.758 IdentitiesOnly yes 00:02:08.758 LogLevel FATAL 00:02:08.758 ForwardAgent yes 00:02:08.758 ForwardX11 yes 00:02:08.758 00:02:08.771 [Pipeline] withEnv 00:02:08.773 [Pipeline] { 00:02:08.788 [Pipeline] sh 00:02:09.079 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:09.079 source /etc/os-release 00:02:09.079 [[ -e /image.version ]] && img=$(< /image.version) 00:02:09.079 # Minimal, systemd-like check. 00:02:09.079 if [[ -e /.dockerenv ]]; then 00:02:09.079 # Clear garbage from the node's name: 00:02:09.079 # agt-er_autotest_547-896 -> autotest_547-896 00:02:09.079 # $HOSTNAME is the actual container id 00:02:09.079 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:09.079 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:09.079 # We can assume this is a mount from a host where container is running, 00:02:09.079 # so fetch its hostname to easily identify the target swarm worker. 
00:02:09.079 container="$(< /etc/hostname) ($agent)" 00:02:09.079 else 00:02:09.079 # Fallback 00:02:09.079 container=$agent 00:02:09.079 fi 00:02:09.079 fi 00:02:09.079 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:09.079 00:02:09.107 [Pipeline] } 00:02:09.123 [Pipeline] // withEnv 00:02:09.131 [Pipeline] setCustomBuildProperty 00:02:09.146 [Pipeline] stage 00:02:09.148 [Pipeline] { (Tests) 00:02:09.165 [Pipeline] sh 00:02:09.445 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:09.718 [Pipeline] sh 00:02:09.998 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:10.270 [Pipeline] timeout 00:02:10.270 Timeout set to expire in 1 hr 0 min 00:02:10.272 [Pipeline] { 00:02:10.286 [Pipeline] sh 00:02:10.565 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:11.132 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:11.144 [Pipeline] sh 00:02:11.424 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:11.697 [Pipeline] sh 00:02:11.980 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:12.253 [Pipeline] sh 00:02:12.549 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:12.549 ++ readlink -f spdk_repo 00:02:12.549 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:12.549 + [[ -n /home/vagrant/spdk_repo ]] 00:02:12.549 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:12.549 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:12.549 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:12.549 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:12.549 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:12.549 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:12.549 + cd /home/vagrant/spdk_repo 00:02:12.549 + source /etc/os-release 00:02:12.549 ++ NAME='Fedora Linux' 00:02:12.549 ++ VERSION='39 (Cloud Edition)' 00:02:12.549 ++ ID=fedora 00:02:12.549 ++ VERSION_ID=39 00:02:12.549 ++ VERSION_CODENAME= 00:02:12.549 ++ PLATFORM_ID=platform:f39 00:02:12.549 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:12.549 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:12.549 ++ LOGO=fedora-logo-icon 00:02:12.549 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:12.549 ++ HOME_URL=https://fedoraproject.org/ 00:02:12.549 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:12.549 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:12.549 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:12.549 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:12.549 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:12.549 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:12.549 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:12.549 ++ SUPPORT_END=2024-11-12 00:02:12.549 ++ VARIANT='Cloud Edition' 00:02:12.549 ++ VARIANT_ID=cloud 00:02:12.549 + uname -a 00:02:12.549 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:12.549 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:12.808 Hugepages 00:02:12.808 node hugesize free / total 00:02:12.808 node0 1048576kB 0 / 0 00:02:12.808 node0 2048kB 0 / 0 00:02:12.809 00:02:12.809 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:12.809 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:12.809 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:12.809 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:12.809 + rm -f /tmp/spdk-ld-path 00:02:12.809 + source autorun-spdk.conf 00:02:12.809 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:12.809 ++ SPDK_TEST_NVMF=1 00:02:12.809 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:12.809 ++ SPDK_TEST_VFIOUSER=1 00:02:12.809 ++ SPDK_TEST_USDT=1 00:02:12.809 ++ SPDK_RUN_UBSAN=1 00:02:12.809 ++ SPDK_TEST_NVMF_MDNS=1 00:02:12.809 ++ NET_TYPE=virt 00:02:12.809 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:12.809 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:12.809 ++ RUN_NIGHTLY=1 00:02:12.809 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:12.809 + [[ -n '' ]] 00:02:12.809 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:13.068 + for M in /var/spdk/build-*-manifest.txt 00:02:13.068 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:13.068 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:13.068 + for M in /var/spdk/build-*-manifest.txt 00:02:13.068 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:13.068 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:13.068 + for M in /var/spdk/build-*-manifest.txt 00:02:13.068 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:13.068 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:13.068 ++ uname 00:02:13.068 + [[ Linux == \L\i\n\u\x ]] 00:02:13.068 + sudo dmesg -T 00:02:13.068 + sudo dmesg --clear 00:02:13.068 + dmesg_pid=5236 00:02:13.068 + [[ Fedora Linux == FreeBSD ]] 00:02:13.068 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:13.068 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:13.068 + sudo dmesg -Tw 00:02:13.068 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:13.068 + [[ -x /usr/src/fio-static/fio ]] 00:02:13.068 + export FIO_BIN=/usr/src/fio-static/fio 00:02:13.068 + FIO_BIN=/usr/src/fio-static/fio 00:02:13.068 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:13.068 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:13.068 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:13.068 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:13.068 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:13.068 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:13.068 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:13.068 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:13.068 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:13.068 Test configuration: 00:02:13.068 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:13.068 SPDK_TEST_NVMF=1 00:02:13.068 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:13.068 SPDK_TEST_VFIOUSER=1 00:02:13.068 SPDK_TEST_USDT=1 00:02:13.068 SPDK_RUN_UBSAN=1 00:02:13.068 SPDK_TEST_NVMF_MDNS=1 00:02:13.068 NET_TYPE=virt 00:02:13.068 SPDK_JSONRPC_GO_CLIENT=1 00:02:13.068 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:13.068 RUN_NIGHTLY=1 06:32:26 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:13.068 06:32:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:13.068 06:32:26 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:13.068 06:32:26 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:13.068 06:32:26 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:13.068 06:32:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.068 06:32:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.068 06:32:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.068 06:32:26 -- paths/export.sh@5 -- $ export PATH 00:02:13.068 06:32:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.068 06:32:26 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:13.068 
06:32:26 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:13.068 06:32:26 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734157946.XXXXXX 00:02:13.068 06:32:26 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734157946.x2Af7B 00:02:13.068 06:32:26 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:13.068 06:32:26 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:02:13.068 06:32:26 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:13.068 06:32:26 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:13.068 06:32:26 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:13.068 06:32:26 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:13.068 06:32:26 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:13.068 06:32:26 -- common/autotest_common.sh@10 -- $ set +x 00:02:13.068 06:32:27 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:02:13.068 06:32:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:13.068 06:32:27 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:13.068 06:32:27 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:13.068 06:32:27 -- spdk/autobuild.sh@16 -- $ date -u 00:02:13.068 Sat Dec 14 06:32:27 AM UTC 2024 00:02:13.068 06:32:27 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:13.068 LTS-67-gc13c99a5e 00:02:13.068 06:32:27 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:13.068 06:32:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:13.068 06:32:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:13.068 06:32:27 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:13.068 06:32:27 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:13.068 06:32:27 -- common/autotest_common.sh@10 -- $ set +x 00:02:13.068 ************************************ 00:02:13.068 START TEST ubsan 00:02:13.068 ************************************ 00:02:13.068 using ubsan 00:02:13.068 06:32:27 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:13.068 00:02:13.068 real 0m0.000s 00:02:13.068 user 0m0.000s 00:02:13.068 sys 0m0.000s 00:02:13.068 06:32:27 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:13.068 06:32:27 -- common/autotest_common.sh@10 -- $ set +x 00:02:13.068 ************************************ 00:02:13.068 END TEST ubsan 00:02:13.068 ************************************ 00:02:13.327 06:32:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:13.327 06:32:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:13.327 06:32:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:13.327 06:32:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:13.327 06:32:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:13.327 06:32:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:13.327 06:32:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:13.327 06:32:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:13.327 06:32:27 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug 
--enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang --with-shared 00:02:13.586 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:13.586 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:13.844 Using 'verbs' RDMA provider 00:02:29.290 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:41.525 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:41.525 go version go1.21.1 linux/amd64 00:02:41.525 Creating mk/config.mk...done. 00:02:41.525 Creating mk/cc.flags.mk...done. 00:02:41.525 Type 'make' to build. 00:02:41.525 06:32:55 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:41.525 06:32:55 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:41.525 06:32:55 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:41.525 06:32:55 -- common/autotest_common.sh@10 -- $ set +x 00:02:41.525 ************************************ 00:02:41.525 START TEST make 00:02:41.525 ************************************ 00:02:41.525 06:32:55 -- common/autotest_common.sh@1114 -- $ make -j10 00:02:41.526 make[1]: Nothing to be done for 'all'. 00:02:42.900 The Meson build system 00:02:42.900 Version: 1.5.0 00:02:42.900 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:42.900 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:42.900 Build type: native build 00:02:42.900 Project name: libvfio-user 00:02:42.900 Project version: 0.0.1 00:02:42.900 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:42.900 C linker for the host machine: cc ld.bfd 2.40-14 00:02:42.900 Host machine cpu family: x86_64 00:02:42.900 Host machine cpu: x86_64 00:02:42.900 Run-time dependency threads found: YES 00:02:42.900 Library dl found: YES 00:02:42.900 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:42.900 Run-time dependency json-c found: YES 0.17 00:02:42.900 Run-time dependency cmocka found: YES 1.1.7 00:02:42.900 Program pytest-3 found: NO 00:02:42.900 Program flake8 found: NO 00:02:42.900 Program misspell-fixer found: NO 00:02:42.900 Program restructuredtext-lint found: NO 00:02:42.900 Program valgrind found: YES (/usr/bin/valgrind) 00:02:42.900 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:42.900 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:42.900 Compiler for C supports arguments -Wwrite-strings: YES 00:02:42.900 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:42.900 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:42.900 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:42.900 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
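The libvfio-user build above and the DPDK build further down are both sub-builds of the single SPDK configure-and-make pass. A rough by-hand reproduction of the same configuration, copying the flag list from the configure line in the log and the parallelism from the job's make -j10 (the repository path assumes the layout used inside this VM):

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan \
        --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang --with-shared
    make -j10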
00:02:42.900 Build targets in project: 8 00:02:42.900 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:42.900 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:42.900 00:02:42.900 libvfio-user 0.0.1 00:02:42.900 00:02:42.900 User defined options 00:02:42.900 buildtype : debug 00:02:42.900 default_library: shared 00:02:42.900 libdir : /usr/local/lib 00:02:42.900 00:02:42.900 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:43.466 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:43.724 [1/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:43.724 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:43.724 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:43.724 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:43.724 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:43.724 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:43.724 [7/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:43.724 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:43.724 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:43.724 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:43.724 [11/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:43.724 [12/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:43.724 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:43.724 [14/37] Compiling C object samples/null.p/null.c.o 00:02:43.983 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:43.983 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:43.983 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:43.983 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:43.983 [19/37] Compiling C object samples/server.p/server.c.o 00:02:43.983 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:43.983 [21/37] Compiling C object samples/client.p/client.c.o 00:02:43.983 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:43.983 [23/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:43.983 [24/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:43.983 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:43.983 [26/37] Linking target samples/client 00:02:43.983 [27/37] Linking target lib/libvfio-user.so.0.0.1 00:02:43.983 [28/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:44.241 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:44.241 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:44.241 [31/37] Linking target test/unit_tests 00:02:44.241 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:44.241 [33/37] Linking target samples/server 00:02:44.241 [34/37] Linking target samples/lspci 00:02:44.241 [35/37] Linking target samples/gpio-pci-idio-16 00:02:44.241 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:44.241 [37/37] Linking target samples/null 00:02:44.241 INFO: autodetecting backend as ninja 00:02:44.241 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:44.500 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:45.067 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:45.067 ninja: no work to do. 00:02:53.176 The Meson build system 00:02:53.176 Version: 1.5.0 00:02:53.176 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:53.176 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:53.176 Build type: native build 00:02:53.176 Program cat found: YES (/usr/bin/cat) 00:02:53.176 Project name: DPDK 00:02:53.176 Project version: 23.11.0 00:02:53.176 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:53.176 C linker for the host machine: cc ld.bfd 2.40-14 00:02:53.176 Host machine cpu family: x86_64 00:02:53.176 Host machine cpu: x86_64 00:02:53.176 Message: ## Building in Developer Mode ## 00:02:53.176 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:53.176 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:53.176 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:53.176 Program python3 found: YES (/usr/bin/python3) 00:02:53.176 Program cat found: YES (/usr/bin/cat) 00:02:53.176 Compiler for C supports arguments -march=native: YES 00:02:53.176 Checking for size of "void *" : 8 00:02:53.176 Checking for size of "void *" : 8 (cached) 00:02:53.176 Library m found: YES 00:02:53.176 Library numa found: YES 00:02:53.176 Has header "numaif.h" : YES 00:02:53.176 Library fdt found: NO 00:02:53.176 Library execinfo found: NO 00:02:53.176 Has header "execinfo.h" : YES 00:02:53.176 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:53.176 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:53.176 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:53.176 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:53.176 Run-time dependency openssl found: YES 3.1.1 00:02:53.176 Run-time dependency libpcap found: YES 1.10.4 00:02:53.176 Has header "pcap.h" with dependency libpcap: YES 00:02:53.176 Compiler for C supports arguments -Wcast-qual: YES 00:02:53.176 Compiler for C supports arguments -Wdeprecated: YES 00:02:53.176 Compiler for C supports arguments -Wformat: YES 00:02:53.176 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:53.176 Compiler for C supports arguments -Wformat-security: NO 00:02:53.176 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:53.176 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:53.176 Compiler for C supports arguments -Wnested-externs: YES 00:02:53.176 Compiler for C supports arguments -Wold-style-definition: YES 00:02:53.176 Compiler for C supports arguments -Wpointer-arith: YES 00:02:53.176 Compiler for C supports arguments -Wsign-compare: YES 00:02:53.176 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:53.176 Compiler for C supports arguments -Wundef: YES 00:02:53.176 Compiler for C supports arguments -Wwrite-strings: YES 00:02:53.176 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:53.176 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:53.176 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:53.176 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:53.176 Program objdump found: YES (/usr/bin/objdump) 00:02:53.176 
Compiler for C supports arguments -mavx512f: YES 00:02:53.176 Checking if "AVX512 checking" compiles: YES 00:02:53.176 Fetching value of define "__SSE4_2__" : 1 00:02:53.176 Fetching value of define "__AES__" : 1 00:02:53.176 Fetching value of define "__AVX__" : 1 00:02:53.176 Fetching value of define "__AVX2__" : 1 00:02:53.176 Fetching value of define "__AVX512BW__" : (undefined) 00:02:53.176 Fetching value of define "__AVX512CD__" : (undefined) 00:02:53.176 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:53.176 Fetching value of define "__AVX512F__" : (undefined) 00:02:53.176 Fetching value of define "__AVX512VL__" : (undefined) 00:02:53.176 Fetching value of define "__PCLMUL__" : 1 00:02:53.176 Fetching value of define "__RDRND__" : 1 00:02:53.176 Fetching value of define "__RDSEED__" : 1 00:02:53.176 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:53.176 Fetching value of define "__znver1__" : (undefined) 00:02:53.176 Fetching value of define "__znver2__" : (undefined) 00:02:53.176 Fetching value of define "__znver3__" : (undefined) 00:02:53.176 Fetching value of define "__znver4__" : (undefined) 00:02:53.176 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:53.176 Message: lib/log: Defining dependency "log" 00:02:53.176 Message: lib/kvargs: Defining dependency "kvargs" 00:02:53.176 Message: lib/telemetry: Defining dependency "telemetry" 00:02:53.176 Checking for function "getentropy" : NO 00:02:53.177 Message: lib/eal: Defining dependency "eal" 00:02:53.177 Message: lib/ring: Defining dependency "ring" 00:02:53.177 Message: lib/rcu: Defining dependency "rcu" 00:02:53.177 Message: lib/mempool: Defining dependency "mempool" 00:02:53.177 Message: lib/mbuf: Defining dependency "mbuf" 00:02:53.177 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:53.177 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:53.177 Compiler for C supports arguments -mpclmul: YES 00:02:53.177 Compiler for C supports arguments -maes: YES 00:02:53.177 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:53.177 Compiler for C supports arguments -mavx512bw: YES 00:02:53.177 Compiler for C supports arguments -mavx512dq: YES 00:02:53.177 Compiler for C supports arguments -mavx512vl: YES 00:02:53.177 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:53.177 Compiler for C supports arguments -mavx2: YES 00:02:53.177 Compiler for C supports arguments -mavx: YES 00:02:53.177 Message: lib/net: Defining dependency "net" 00:02:53.177 Message: lib/meter: Defining dependency "meter" 00:02:53.177 Message: lib/ethdev: Defining dependency "ethdev" 00:02:53.177 Message: lib/pci: Defining dependency "pci" 00:02:53.177 Message: lib/cmdline: Defining dependency "cmdline" 00:02:53.177 Message: lib/hash: Defining dependency "hash" 00:02:53.177 Message: lib/timer: Defining dependency "timer" 00:02:53.177 Message: lib/compressdev: Defining dependency "compressdev" 00:02:53.177 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:53.177 Message: lib/dmadev: Defining dependency "dmadev" 00:02:53.177 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:53.177 Message: lib/power: Defining dependency "power" 00:02:53.177 Message: lib/reorder: Defining dependency "reorder" 00:02:53.177 Message: lib/security: Defining dependency "security" 00:02:53.177 Has header "linux/userfaultfd.h" : YES 00:02:53.177 Has header "linux/vduse.h" : YES 00:02:53.177 Message: lib/vhost: Defining dependency "vhost" 00:02:53.177 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:02:53.177 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:53.177 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:53.177 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:53.177 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:53.177 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:53.177 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:53.177 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:53.177 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:53.177 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:53.177 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:53.177 Configuring doxy-api-html.conf using configuration 00:02:53.177 Configuring doxy-api-man.conf using configuration 00:02:53.177 Program mandb found: YES (/usr/bin/mandb) 00:02:53.177 Program sphinx-build found: NO 00:02:53.177 Configuring rte_build_config.h using configuration 00:02:53.177 Message: 00:02:53.177 ================= 00:02:53.177 Applications Enabled 00:02:53.177 ================= 00:02:53.177 00:02:53.177 apps: 00:02:53.177 00:02:53.177 00:02:53.177 Message: 00:02:53.177 ================= 00:02:53.177 Libraries Enabled 00:02:53.177 ================= 00:02:53.177 00:02:53.177 libs: 00:02:53.177 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:53.177 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:53.177 cryptodev, dmadev, power, reorder, security, vhost, 00:02:53.177 00:02:53.177 Message: 00:02:53.177 =============== 00:02:53.177 Drivers Enabled 00:02:53.177 =============== 00:02:53.177 00:02:53.177 common: 00:02:53.177 00:02:53.177 bus: 00:02:53.177 pci, vdev, 00:02:53.177 mempool: 00:02:53.177 ring, 00:02:53.177 dma: 00:02:53.177 00:02:53.177 net: 00:02:53.177 00:02:53.177 crypto: 00:02:53.177 00:02:53.177 compress: 00:02:53.177 00:02:53.177 vdpa: 00:02:53.177 00:02:53.177 00:02:53.177 Message: 00:02:53.177 ================= 00:02:53.177 Content Skipped 00:02:53.177 ================= 00:02:53.177 00:02:53.177 apps: 00:02:53.177 dumpcap: explicitly disabled via build config 00:02:53.177 graph: explicitly disabled via build config 00:02:53.177 pdump: explicitly disabled via build config 00:02:53.177 proc-info: explicitly disabled via build config 00:02:53.177 test-acl: explicitly disabled via build config 00:02:53.177 test-bbdev: explicitly disabled via build config 00:02:53.177 test-cmdline: explicitly disabled via build config 00:02:53.177 test-compress-perf: explicitly disabled via build config 00:02:53.177 test-crypto-perf: explicitly disabled via build config 00:02:53.177 test-dma-perf: explicitly disabled via build config 00:02:53.177 test-eventdev: explicitly disabled via build config 00:02:53.177 test-fib: explicitly disabled via build config 00:02:53.177 test-flow-perf: explicitly disabled via build config 00:02:53.177 test-gpudev: explicitly disabled via build config 00:02:53.177 test-mldev: explicitly disabled via build config 00:02:53.177 test-pipeline: explicitly disabled via build config 00:02:53.177 test-pmd: explicitly disabled via build config 00:02:53.177 test-regex: explicitly disabled via build config 00:02:53.177 test-sad: explicitly disabled via build config 00:02:53.177 test-security-perf: explicitly disabled via build config 00:02:53.177 00:02:53.177 libs: 00:02:53.177 metrics: explicitly 
disabled via build config 00:02:53.177 acl: explicitly disabled via build config 00:02:53.177 bbdev: explicitly disabled via build config 00:02:53.177 bitratestats: explicitly disabled via build config 00:02:53.177 bpf: explicitly disabled via build config 00:02:53.177 cfgfile: explicitly disabled via build config 00:02:53.177 distributor: explicitly disabled via build config 00:02:53.177 efd: explicitly disabled via build config 00:02:53.177 eventdev: explicitly disabled via build config 00:02:53.177 dispatcher: explicitly disabled via build config 00:02:53.177 gpudev: explicitly disabled via build config 00:02:53.177 gro: explicitly disabled via build config 00:02:53.177 gso: explicitly disabled via build config 00:02:53.177 ip_frag: explicitly disabled via build config 00:02:53.177 jobstats: explicitly disabled via build config 00:02:53.177 latencystats: explicitly disabled via build config 00:02:53.177 lpm: explicitly disabled via build config 00:02:53.177 member: explicitly disabled via build config 00:02:53.177 pcapng: explicitly disabled via build config 00:02:53.177 rawdev: explicitly disabled via build config 00:02:53.177 regexdev: explicitly disabled via build config 00:02:53.177 mldev: explicitly disabled via build config 00:02:53.177 rib: explicitly disabled via build config 00:02:53.177 sched: explicitly disabled via build config 00:02:53.177 stack: explicitly disabled via build config 00:02:53.177 ipsec: explicitly disabled via build config 00:02:53.177 pdcp: explicitly disabled via build config 00:02:53.177 fib: explicitly disabled via build config 00:02:53.177 port: explicitly disabled via build config 00:02:53.177 pdump: explicitly disabled via build config 00:02:53.177 table: explicitly disabled via build config 00:02:53.177 pipeline: explicitly disabled via build config 00:02:53.177 graph: explicitly disabled via build config 00:02:53.177 node: explicitly disabled via build config 00:02:53.177 00:02:53.177 drivers: 00:02:53.177 common/cpt: not in enabled drivers build config 00:02:53.177 common/dpaax: not in enabled drivers build config 00:02:53.177 common/iavf: not in enabled drivers build config 00:02:53.177 common/idpf: not in enabled drivers build config 00:02:53.177 common/mvep: not in enabled drivers build config 00:02:53.177 common/octeontx: not in enabled drivers build config 00:02:53.177 bus/auxiliary: not in enabled drivers build config 00:02:53.177 bus/cdx: not in enabled drivers build config 00:02:53.177 bus/dpaa: not in enabled drivers build config 00:02:53.177 bus/fslmc: not in enabled drivers build config 00:02:53.177 bus/ifpga: not in enabled drivers build config 00:02:53.177 bus/platform: not in enabled drivers build config 00:02:53.177 bus/vmbus: not in enabled drivers build config 00:02:53.177 common/cnxk: not in enabled drivers build config 00:02:53.177 common/mlx5: not in enabled drivers build config 00:02:53.177 common/nfp: not in enabled drivers build config 00:02:53.177 common/qat: not in enabled drivers build config 00:02:53.177 common/sfc_efx: not in enabled drivers build config 00:02:53.177 mempool/bucket: not in enabled drivers build config 00:02:53.177 mempool/cnxk: not in enabled drivers build config 00:02:53.177 mempool/dpaa: not in enabled drivers build config 00:02:53.177 mempool/dpaa2: not in enabled drivers build config 00:02:53.177 mempool/octeontx: not in enabled drivers build config 00:02:53.177 mempool/stack: not in enabled drivers build config 00:02:53.177 dma/cnxk: not in enabled drivers build config 00:02:53.177 dma/dpaa: not in 
enabled drivers build config 00:02:53.177 dma/dpaa2: not in enabled drivers build config 00:02:53.177 dma/hisilicon: not in enabled drivers build config 00:02:53.177 dma/idxd: not in enabled drivers build config 00:02:53.177 dma/ioat: not in enabled drivers build config 00:02:53.177 dma/skeleton: not in enabled drivers build config 00:02:53.177 net/af_packet: not in enabled drivers build config 00:02:53.177 net/af_xdp: not in enabled drivers build config 00:02:53.177 net/ark: not in enabled drivers build config 00:02:53.177 net/atlantic: not in enabled drivers build config 00:02:53.177 net/avp: not in enabled drivers build config 00:02:53.177 net/axgbe: not in enabled drivers build config 00:02:53.177 net/bnx2x: not in enabled drivers build config 00:02:53.177 net/bnxt: not in enabled drivers build config 00:02:53.177 net/bonding: not in enabled drivers build config 00:02:53.177 net/cnxk: not in enabled drivers build config 00:02:53.177 net/cpfl: not in enabled drivers build config 00:02:53.177 net/cxgbe: not in enabled drivers build config 00:02:53.177 net/dpaa: not in enabled drivers build config 00:02:53.177 net/dpaa2: not in enabled drivers build config 00:02:53.177 net/e1000: not in enabled drivers build config 00:02:53.177 net/ena: not in enabled drivers build config 00:02:53.177 net/enetc: not in enabled drivers build config 00:02:53.177 net/enetfec: not in enabled drivers build config 00:02:53.177 net/enic: not in enabled drivers build config 00:02:53.177 net/failsafe: not in enabled drivers build config 00:02:53.177 net/fm10k: not in enabled drivers build config 00:02:53.177 net/gve: not in enabled drivers build config 00:02:53.177 net/hinic: not in enabled drivers build config 00:02:53.178 net/hns3: not in enabled drivers build config 00:02:53.178 net/i40e: not in enabled drivers build config 00:02:53.178 net/iavf: not in enabled drivers build config 00:02:53.178 net/ice: not in enabled drivers build config 00:02:53.178 net/idpf: not in enabled drivers build config 00:02:53.178 net/igc: not in enabled drivers build config 00:02:53.178 net/ionic: not in enabled drivers build config 00:02:53.178 net/ipn3ke: not in enabled drivers build config 00:02:53.178 net/ixgbe: not in enabled drivers build config 00:02:53.178 net/mana: not in enabled drivers build config 00:02:53.178 net/memif: not in enabled drivers build config 00:02:53.178 net/mlx4: not in enabled drivers build config 00:02:53.178 net/mlx5: not in enabled drivers build config 00:02:53.178 net/mvneta: not in enabled drivers build config 00:02:53.178 net/mvpp2: not in enabled drivers build config 00:02:53.178 net/netvsc: not in enabled drivers build config 00:02:53.178 net/nfb: not in enabled drivers build config 00:02:53.178 net/nfp: not in enabled drivers build config 00:02:53.178 net/ngbe: not in enabled drivers build config 00:02:53.178 net/null: not in enabled drivers build config 00:02:53.178 net/octeontx: not in enabled drivers build config 00:02:53.178 net/octeon_ep: not in enabled drivers build config 00:02:53.178 net/pcap: not in enabled drivers build config 00:02:53.178 net/pfe: not in enabled drivers build config 00:02:53.178 net/qede: not in enabled drivers build config 00:02:53.178 net/ring: not in enabled drivers build config 00:02:53.178 net/sfc: not in enabled drivers build config 00:02:53.178 net/softnic: not in enabled drivers build config 00:02:53.178 net/tap: not in enabled drivers build config 00:02:53.178 net/thunderx: not in enabled drivers build config 00:02:53.178 net/txgbe: not in enabled drivers 
build config 00:02:53.178 net/vdev_netvsc: not in enabled drivers build config 00:02:53.178 net/vhost: not in enabled drivers build config 00:02:53.178 net/virtio: not in enabled drivers build config 00:02:53.178 net/vmxnet3: not in enabled drivers build config 00:02:53.178 raw/*: missing internal dependency, "rawdev" 00:02:53.178 crypto/armv8: not in enabled drivers build config 00:02:53.178 crypto/bcmfs: not in enabled drivers build config 00:02:53.178 crypto/caam_jr: not in enabled drivers build config 00:02:53.178 crypto/ccp: not in enabled drivers build config 00:02:53.178 crypto/cnxk: not in enabled drivers build config 00:02:53.178 crypto/dpaa_sec: not in enabled drivers build config 00:02:53.178 crypto/dpaa2_sec: not in enabled drivers build config 00:02:53.178 crypto/ipsec_mb: not in enabled drivers build config 00:02:53.178 crypto/mlx5: not in enabled drivers build config 00:02:53.178 crypto/mvsam: not in enabled drivers build config 00:02:53.178 crypto/nitrox: not in enabled drivers build config 00:02:53.178 crypto/null: not in enabled drivers build config 00:02:53.178 crypto/octeontx: not in enabled drivers build config 00:02:53.178 crypto/openssl: not in enabled drivers build config 00:02:53.178 crypto/scheduler: not in enabled drivers build config 00:02:53.178 crypto/uadk: not in enabled drivers build config 00:02:53.178 crypto/virtio: not in enabled drivers build config 00:02:53.178 compress/isal: not in enabled drivers build config 00:02:53.178 compress/mlx5: not in enabled drivers build config 00:02:53.178 compress/octeontx: not in enabled drivers build config 00:02:53.178 compress/zlib: not in enabled drivers build config 00:02:53.178 regex/*: missing internal dependency, "regexdev" 00:02:53.178 ml/*: missing internal dependency, "mldev" 00:02:53.178 vdpa/ifc: not in enabled drivers build config 00:02:53.178 vdpa/mlx5: not in enabled drivers build config 00:02:53.178 vdpa/nfp: not in enabled drivers build config 00:02:53.178 vdpa/sfc: not in enabled drivers build config 00:02:53.178 event/*: missing internal dependency, "eventdev" 00:02:53.178 baseband/*: missing internal dependency, "bbdev" 00:02:53.178 gpu/*: missing internal dependency, "gpudev" 00:02:53.178 00:02:53.178 00:02:53.178 Build targets in project: 85 00:02:53.178 00:02:53.178 DPDK 23.11.0 00:02:53.178 00:02:53.178 User defined options 00:02:53.178 buildtype : debug 00:02:53.178 default_library : shared 00:02:53.178 libdir : lib 00:02:53.178 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:53.178 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:53.178 c_link_args : 00:02:53.178 cpu_instruction_set: native 00:02:53.178 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:53.178 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:53.178 enable_docs : false 00:02:53.178 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:53.178 enable_kmods : false 00:02:53.178 tests : false 00:02:53.178 00:02:53.178 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:53.178 ninja: Entering directory 
`/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:53.178 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:53.178 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:53.178 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:53.178 [4/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:53.178 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:53.178 [6/265] Linking static target lib/librte_kvargs.a 00:02:53.178 [7/265] Linking static target lib/librte_log.a 00:02:53.178 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:53.178 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:53.436 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:53.694 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.951 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:53.952 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:53.952 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:53.952 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:53.952 [16/265] Linking static target lib/librte_telemetry.a 00:02:54.209 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:54.209 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:54.209 [19/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.209 [20/265] Linking target lib/librte_log.so.24.0 00:02:54.468 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:54.468 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:54.468 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:54.468 [24/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:54.468 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:54.727 [26/265] Linking target lib/librte_kvargs.so.24.0 00:02:54.987 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:54.987 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:54.987 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:54.987 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:54.987 [31/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.987 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:54.987 [33/265] Linking target lib/librte_telemetry.so.24.0 00:02:55.245 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:55.245 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:55.503 [36/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:55.503 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:55.503 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:55.503 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:55.503 [40/265] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:55.503 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:55.503 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:55.761 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:56.019 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:56.019 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:56.019 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:56.019 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:56.019 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:56.277 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:56.277 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:56.535 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:56.793 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:56.793 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:56.793 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:56.793 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:56.793 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:57.051 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:57.051 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:57.051 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:57.051 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:57.051 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:57.051 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:57.309 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:57.567 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:57.826 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:57.826 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:57.826 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:57.826 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:57.826 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:58.084 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:58.084 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:58.084 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:58.084 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:58.084 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:58.084 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:58.084 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:58.342 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:58.342 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:58.600 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:58.858 [80/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:58.858 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:59.116 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:59.116 [83/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:59.117 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:59.117 [85/265] Linking static target lib/librte_eal.a 00:02:59.117 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:59.117 [87/265] Linking static target lib/librte_ring.a 00:02:59.375 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:59.375 [89/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:59.375 [90/265] Linking static target lib/librte_rcu.a 00:02:59.375 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:59.633 [92/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.891 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:59.891 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:59.891 [95/265] Linking static target lib/librte_mempool.a 00:03:00.149 [96/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.149 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:00.149 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:00.149 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:00.149 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:00.149 [101/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:00.407 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:00.407 [103/265] Linking static target lib/librte_mbuf.a 00:03:01.033 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:01.033 [105/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:01.033 [106/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:01.033 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:01.033 [108/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:01.033 [109/265] Linking static target lib/librte_net.a 00:03:01.307 [110/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:01.307 [111/265] Linking static target lib/librte_meter.a 00:03:01.307 [112/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.565 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:01.565 [114/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.823 [115/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.823 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.823 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:02.081 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:02.081 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:03.015 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:03.015 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 
00:03:03.015 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:03.015 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:03.015 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:03.015 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:03.015 [126/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:03.015 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:03.016 [128/265] Linking static target lib/librte_pci.a 00:03:03.274 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:03.274 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:03.274 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:03.274 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:03.532 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:03.532 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:03.532 [135/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.791 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:03.791 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:03.791 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:03.791 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:03.791 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:04.049 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:04.049 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:04.049 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:04.307 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:04.307 [145/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:04.307 [146/265] Linking static target lib/librte_cmdline.a 00:03:04.565 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:04.565 [148/265] Linking static target lib/librte_ethdev.a 00:03:04.823 [149/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:04.823 [150/265] Linking static target lib/librte_timer.a 00:03:04.823 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:05.081 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:05.081 [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:05.081 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:05.339 [155/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:05.339 [156/265] Linking static target lib/librte_hash.a 00:03:05.339 [157/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:05.339 [158/265] Linking static target lib/librte_compressdev.a 00:03:05.598 [159/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.856 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:05.856 [161/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:05.856 [162/265] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:05.856 [163/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:05.856 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:06.114 [165/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.372 [166/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:06.372 [167/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:06.372 [168/265] Linking static target lib/librte_dmadev.a 00:03:06.372 [169/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.372 [170/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:06.372 [171/265] Linking static target lib/librte_cryptodev.a 00:03:06.372 [172/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:06.629 [173/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.629 [174/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:06.630 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:07.195 [176/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:07.195 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:07.195 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:07.195 [179/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:07.195 [180/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.196 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:07.454 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:07.454 [183/265] Linking static target lib/librte_power.a 00:03:07.454 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:07.454 [185/265] Linking static target lib/librte_reorder.a 00:03:07.712 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:07.970 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:07.970 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:07.970 [189/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.970 [190/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:08.229 [191/265] Linking static target lib/librte_security.a 00:03:08.487 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:08.746 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.746 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:08.746 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:09.005 [196/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.005 [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:09.005 [198/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.264 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:09.264 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:09.523 [201/265] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:09.523 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:09.523 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:09.781 [204/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:09.781 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:09.781 [206/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:09.781 [207/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:10.038 [208/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:10.038 [209/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:10.296 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:10.296 [211/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:10.296 [212/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:10.296 [213/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:10.296 [214/265] Linking static target drivers/librte_bus_pci.a 00:03:10.296 [215/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:10.296 [216/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:10.296 [217/265] Linking static target drivers/librte_bus_vdev.a 00:03:10.296 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:10.296 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:10.555 [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.555 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:10.555 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:10.555 [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:10.555 [224/265] Linking static target drivers/librte_mempool_ring.a 00:03:10.814 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.381 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:11.381 [227/265] Linking static target lib/librte_vhost.a 00:03:11.947 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.947 [229/265] Linking target lib/librte_eal.so.24.0 00:03:12.206 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:12.206 [231/265] Linking target lib/librte_pci.so.24.0 00:03:12.206 [232/265] Linking target lib/librte_meter.so.24.0 00:03:12.206 [233/265] Linking target lib/librte_timer.so.24.0 00:03:12.206 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:03:12.206 [235/265] Linking target lib/librte_ring.so.24.0 00:03:12.206 [236/265] Linking target lib/librte_dmadev.so.24.0 00:03:12.206 [237/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:12.206 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:12.206 [239/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:12.206 [240/265] Generating symbol file 
lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:12.206 [241/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:12.464 [242/265] Linking target lib/librte_mempool.so.24.0 00:03:12.464 [243/265] Linking target lib/librte_rcu.so.24.0 00:03:12.464 [244/265] Linking target drivers/librte_bus_pci.so.24.0 00:03:12.464 [245/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:12.464 [246/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:12.464 [247/265] Linking target drivers/librte_mempool_ring.so.24.0 00:03:12.464 [248/265] Linking target lib/librte_mbuf.so.24.0 00:03:12.722 [249/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:12.722 [250/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.722 [251/265] Linking target lib/librte_net.so.24.0 00:03:12.722 [252/265] Linking target lib/librte_compressdev.so.24.0 00:03:12.722 [253/265] Linking target lib/librte_reorder.so.24.0 00:03:12.722 [254/265] Linking target lib/librte_cryptodev.so.24.0 00:03:12.981 [255/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.981 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:12.981 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:12.981 [258/265] Linking target lib/librte_hash.so.24.0 00:03:12.981 [259/265] Linking target lib/librte_cmdline.so.24.0 00:03:12.981 [260/265] Linking target lib/librte_ethdev.so.24.0 00:03:12.981 [261/265] Linking target lib/librte_security.so.24.0 00:03:13.239 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:13.239 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:13.239 [264/265] Linking target lib/librte_power.so.24.0 00:03:13.239 [265/265] Linking target lib/librte_vhost.so.24.0 00:03:13.239 INFO: autodetecting backend as ninja 00:03:13.239 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:15.787 CC lib/ut_mock/mock.o 00:03:15.787 CC lib/log/log.o 00:03:15.787 CC lib/log/log_flags.o 00:03:15.787 CC lib/log/log_deprecated.o 00:03:15.787 CC lib/ut/ut.o 00:03:15.787 LIB libspdk_ut_mock.a 00:03:15.787 LIB libspdk_log.a 00:03:15.787 SO libspdk_ut_mock.so.5.0 00:03:15.787 LIB libspdk_ut.a 00:03:15.787 SO libspdk_ut.so.1.0 00:03:15.787 SO libspdk_log.so.6.1 00:03:15.787 SYMLINK libspdk_ut_mock.so 00:03:15.787 SYMLINK libspdk_ut.so 00:03:15.787 SYMLINK libspdk_log.so 00:03:15.787 CC lib/dma/dma.o 00:03:15.787 CXX lib/trace_parser/trace.o 00:03:15.787 CC lib/ioat/ioat.o 00:03:15.787 CC lib/util/bit_array.o 00:03:15.787 CC lib/util/base64.o 00:03:15.787 CC lib/util/cpuset.o 00:03:15.787 CC lib/util/crc16.o 00:03:15.787 CC lib/util/crc32.o 00:03:15.787 CC lib/util/crc32c.o 00:03:15.787 CC lib/vfio_user/host/vfio_user_pci.o 00:03:16.045 CC lib/util/crc32_ieee.o 00:03:16.045 CC lib/util/crc64.o 00:03:16.045 CC lib/util/dif.o 00:03:16.045 CC lib/util/fd.o 00:03:16.045 LIB libspdk_dma.a 00:03:16.045 CC lib/util/file.o 00:03:16.045 CC lib/util/hexlify.o 00:03:16.045 SO libspdk_dma.so.3.0 00:03:16.045 CC lib/vfio_user/host/vfio_user.o 00:03:16.045 CC lib/util/iov.o 00:03:16.045 LIB libspdk_ioat.a 00:03:16.045 CC lib/util/math.o 00:03:16.304 SYMLINK libspdk_dma.so 00:03:16.304 
CC lib/util/pipe.o 00:03:16.304 SO libspdk_ioat.so.6.0 00:03:16.304 CC lib/util/strerror_tls.o 00:03:16.304 CC lib/util/string.o 00:03:16.304 CC lib/util/uuid.o 00:03:16.304 SYMLINK libspdk_ioat.so 00:03:16.304 CC lib/util/fd_group.o 00:03:16.304 CC lib/util/xor.o 00:03:16.304 CC lib/util/zipf.o 00:03:16.304 LIB libspdk_vfio_user.a 00:03:16.304 SO libspdk_vfio_user.so.4.0 00:03:16.563 SYMLINK libspdk_vfio_user.so 00:03:16.563 LIB libspdk_util.a 00:03:16.563 SO libspdk_util.so.8.0 00:03:16.821 SYMLINK libspdk_util.so 00:03:16.821 LIB libspdk_trace_parser.a 00:03:16.821 SO libspdk_trace_parser.so.4.0 00:03:16.821 CC lib/env_dpdk/env.o 00:03:16.821 CC lib/env_dpdk/memory.o 00:03:16.821 CC lib/env_dpdk/pci.o 00:03:16.821 CC lib/env_dpdk/init.o 00:03:16.821 CC lib/rdma/common.o 00:03:16.821 CC lib/conf/conf.o 00:03:16.821 CC lib/idxd/idxd.o 00:03:16.821 CC lib/json/json_parse.o 00:03:16.821 CC lib/vmd/vmd.o 00:03:17.081 SYMLINK libspdk_trace_parser.so 00:03:17.081 CC lib/vmd/led.o 00:03:17.081 LIB libspdk_conf.a 00:03:17.081 CC lib/json/json_util.o 00:03:17.081 CC lib/json/json_write.o 00:03:17.081 SO libspdk_conf.so.5.0 00:03:17.340 CC lib/rdma/rdma_verbs.o 00:03:17.340 SYMLINK libspdk_conf.so 00:03:17.340 CC lib/idxd/idxd_user.o 00:03:17.340 CC lib/env_dpdk/threads.o 00:03:17.340 CC lib/idxd/idxd_kernel.o 00:03:17.340 CC lib/env_dpdk/pci_ioat.o 00:03:17.340 LIB libspdk_rdma.a 00:03:17.340 CC lib/env_dpdk/pci_virtio.o 00:03:17.340 LIB libspdk_json.a 00:03:17.340 SO libspdk_rdma.so.5.0 00:03:17.598 CC lib/env_dpdk/pci_vmd.o 00:03:17.598 LIB libspdk_idxd.a 00:03:17.598 CC lib/env_dpdk/pci_idxd.o 00:03:17.598 SO libspdk_json.so.5.1 00:03:17.598 SYMLINK libspdk_rdma.so 00:03:17.598 CC lib/env_dpdk/pci_event.o 00:03:17.598 SO libspdk_idxd.so.11.0 00:03:17.598 CC lib/env_dpdk/sigbus_handler.o 00:03:17.598 SYMLINK libspdk_json.so 00:03:17.598 CC lib/env_dpdk/pci_dpdk.o 00:03:17.598 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:17.598 LIB libspdk_vmd.a 00:03:17.598 SYMLINK libspdk_idxd.so 00:03:17.598 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:17.598 SO libspdk_vmd.so.5.0 00:03:17.598 SYMLINK libspdk_vmd.so 00:03:17.598 CC lib/jsonrpc/jsonrpc_server.o 00:03:17.598 CC lib/jsonrpc/jsonrpc_client.o 00:03:17.598 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:17.598 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:17.856 LIB libspdk_jsonrpc.a 00:03:18.114 SO libspdk_jsonrpc.so.5.1 00:03:18.114 SYMLINK libspdk_jsonrpc.so 00:03:18.372 CC lib/rpc/rpc.o 00:03:18.372 LIB libspdk_env_dpdk.a 00:03:18.372 SO libspdk_env_dpdk.so.13.0 00:03:18.372 LIB libspdk_rpc.a 00:03:18.630 SO libspdk_rpc.so.5.0 00:03:18.630 SYMLINK libspdk_rpc.so 00:03:18.630 SYMLINK libspdk_env_dpdk.so 00:03:18.630 CC lib/sock/sock.o 00:03:18.630 CC lib/sock/sock_rpc.o 00:03:18.630 CC lib/notify/notify.o 00:03:18.630 CC lib/notify/notify_rpc.o 00:03:18.631 CC lib/trace/trace.o 00:03:18.631 CC lib/trace/trace_flags.o 00:03:18.631 CC lib/trace/trace_rpc.o 00:03:18.889 LIB libspdk_notify.a 00:03:18.889 SO libspdk_notify.so.5.0 00:03:18.889 LIB libspdk_trace.a 00:03:18.889 SYMLINK libspdk_notify.so 00:03:19.148 SO libspdk_trace.so.9.0 00:03:19.148 SYMLINK libspdk_trace.so 00:03:19.148 LIB libspdk_sock.a 00:03:19.148 SO libspdk_sock.so.8.0 00:03:19.148 CC lib/thread/thread.o 00:03:19.148 CC lib/thread/iobuf.o 00:03:19.407 SYMLINK libspdk_sock.so 00:03:19.407 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:19.407 CC lib/nvme/nvme_ctrlr.o 00:03:19.407 CC lib/nvme/nvme_ns_cmd.o 00:03:19.407 CC lib/nvme/nvme_fabric.o 00:03:19.407 CC lib/nvme/nvme_qpair.o 00:03:19.407 CC 
lib/nvme/nvme_pcie_common.o 00:03:19.407 CC lib/nvme/nvme_ns.o 00:03:19.407 CC lib/nvme/nvme_pcie.o 00:03:19.666 CC lib/nvme/nvme.o 00:03:20.233 CC lib/nvme/nvme_quirks.o 00:03:20.233 CC lib/nvme/nvme_transport.o 00:03:20.233 CC lib/nvme/nvme_discovery.o 00:03:20.492 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:20.492 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:20.492 CC lib/nvme/nvme_tcp.o 00:03:20.492 CC lib/nvme/nvme_opal.o 00:03:20.750 CC lib/nvme/nvme_io_msg.o 00:03:20.750 LIB libspdk_thread.a 00:03:20.750 SO libspdk_thread.so.9.0 00:03:20.750 CC lib/nvme/nvme_poll_group.o 00:03:20.750 SYMLINK libspdk_thread.so 00:03:20.750 CC lib/nvme/nvme_zns.o 00:03:21.009 CC lib/nvme/nvme_cuse.o 00:03:21.009 CC lib/nvme/nvme_vfio_user.o 00:03:21.009 CC lib/accel/accel.o 00:03:21.324 CC lib/blob/blobstore.o 00:03:21.324 CC lib/init/json_config.o 00:03:21.324 CC lib/init/subsystem.o 00:03:21.600 CC lib/blob/request.o 00:03:21.600 CC lib/init/subsystem_rpc.o 00:03:21.600 CC lib/init/rpc.o 00:03:21.600 CC lib/blob/zeroes.o 00:03:21.600 CC lib/blob/blob_bs_dev.o 00:03:21.600 LIB libspdk_init.a 00:03:21.600 CC lib/accel/accel_rpc.o 00:03:21.600 SO libspdk_init.so.4.0 00:03:21.858 CC lib/nvme/nvme_rdma.o 00:03:21.858 CC lib/virtio/virtio.o 00:03:21.858 CC lib/virtio/virtio_vhost_user.o 00:03:21.858 SYMLINK libspdk_init.so 00:03:21.858 CC lib/accel/accel_sw.o 00:03:21.858 CC lib/virtio/virtio_vfio_user.o 00:03:21.858 CC lib/virtio/virtio_pci.o 00:03:21.858 CC lib/vfu_tgt/tgt_endpoint.o 00:03:21.858 CC lib/vfu_tgt/tgt_rpc.o 00:03:22.117 LIB libspdk_accel.a 00:03:22.117 SO libspdk_accel.so.14.0 00:03:22.117 LIB libspdk_virtio.a 00:03:22.117 CC lib/event/reactor.o 00:03:22.117 CC lib/event/log_rpc.o 00:03:22.117 CC lib/event/app.o 00:03:22.117 CC lib/event/app_rpc.o 00:03:22.117 CC lib/event/scheduler_static.o 00:03:22.117 LIB libspdk_vfu_tgt.a 00:03:22.375 SO libspdk_virtio.so.6.0 00:03:22.375 SYMLINK libspdk_accel.so 00:03:22.375 SO libspdk_vfu_tgt.so.2.0 00:03:22.375 SYMLINK libspdk_virtio.so 00:03:22.375 SYMLINK libspdk_vfu_tgt.so 00:03:22.375 CC lib/bdev/bdev_rpc.o 00:03:22.375 CC lib/bdev/scsi_nvme.o 00:03:22.375 CC lib/bdev/bdev.o 00:03:22.375 CC lib/bdev/part.o 00:03:22.375 CC lib/bdev/bdev_zone.o 00:03:22.635 LIB libspdk_event.a 00:03:22.635 SO libspdk_event.so.12.0 00:03:22.635 SYMLINK libspdk_event.so 00:03:23.202 LIB libspdk_nvme.a 00:03:23.202 SO libspdk_nvme.so.12.0 00:03:23.460 SYMLINK libspdk_nvme.so 00:03:24.028 LIB libspdk_blob.a 00:03:24.287 SO libspdk_blob.so.10.1 00:03:24.287 SYMLINK libspdk_blob.so 00:03:24.546 CC lib/lvol/lvol.o 00:03:24.546 CC lib/blobfs/blobfs.o 00:03:24.546 CC lib/blobfs/tree.o 00:03:24.805 LIB libspdk_bdev.a 00:03:25.064 SO libspdk_bdev.so.14.0 00:03:25.064 SYMLINK libspdk_bdev.so 00:03:25.323 CC lib/ublk/ublk.o 00:03:25.323 CC lib/ublk/ublk_rpc.o 00:03:25.323 CC lib/scsi/dev.o 00:03:25.323 CC lib/nvmf/ctrlr.o 00:03:25.323 CC lib/scsi/lun.o 00:03:25.323 CC lib/nvmf/ctrlr_discovery.o 00:03:25.323 CC lib/nbd/nbd.o 00:03:25.323 CC lib/ftl/ftl_core.o 00:03:25.323 LIB libspdk_blobfs.a 00:03:25.323 SO libspdk_blobfs.so.9.0 00:03:25.323 LIB libspdk_lvol.a 00:03:25.323 SO libspdk_lvol.so.9.1 00:03:25.582 CC lib/nbd/nbd_rpc.o 00:03:25.582 SYMLINK libspdk_blobfs.so 00:03:25.582 CC lib/ftl/ftl_init.o 00:03:25.582 SYMLINK libspdk_lvol.so 00:03:25.582 CC lib/ftl/ftl_layout.o 00:03:25.582 CC lib/ftl/ftl_debug.o 00:03:25.582 CC lib/scsi/port.o 00:03:25.582 CC lib/scsi/scsi.o 00:03:25.582 LIB libspdk_nbd.a 00:03:25.841 CC lib/scsi/scsi_bdev.o 00:03:25.841 SO libspdk_nbd.so.6.0 00:03:25.841 
CC lib/scsi/scsi_pr.o 00:03:25.841 CC lib/nvmf/ctrlr_bdev.o 00:03:25.841 CC lib/nvmf/subsystem.o 00:03:25.841 CC lib/ftl/ftl_io.o 00:03:25.841 SYMLINK libspdk_nbd.so 00:03:25.841 CC lib/ftl/ftl_sb.o 00:03:25.841 CC lib/scsi/scsi_rpc.o 00:03:25.841 LIB libspdk_ublk.a 00:03:25.841 SO libspdk_ublk.so.2.0 00:03:25.841 CC lib/ftl/ftl_l2p.o 00:03:26.100 SYMLINK libspdk_ublk.so 00:03:26.100 CC lib/ftl/ftl_l2p_flat.o 00:03:26.100 CC lib/ftl/ftl_nv_cache.o 00:03:26.100 CC lib/nvmf/nvmf.o 00:03:26.100 CC lib/nvmf/nvmf_rpc.o 00:03:26.100 CC lib/scsi/task.o 00:03:26.100 CC lib/ftl/ftl_band.o 00:03:26.100 CC lib/ftl/ftl_band_ops.o 00:03:26.100 CC lib/ftl/ftl_writer.o 00:03:26.359 LIB libspdk_scsi.a 00:03:26.359 SO libspdk_scsi.so.8.0 00:03:26.618 CC lib/nvmf/transport.o 00:03:26.618 CC lib/ftl/ftl_rq.o 00:03:26.618 SYMLINK libspdk_scsi.so 00:03:26.618 CC lib/ftl/ftl_reloc.o 00:03:26.618 CC lib/iscsi/conn.o 00:03:26.618 CC lib/vhost/vhost.o 00:03:26.618 CC lib/vhost/vhost_rpc.o 00:03:26.877 CC lib/nvmf/tcp.o 00:03:26.877 CC lib/ftl/ftl_l2p_cache.o 00:03:26.877 CC lib/ftl/ftl_p2l.o 00:03:26.877 CC lib/nvmf/vfio_user.o 00:03:27.136 CC lib/nvmf/rdma.o 00:03:27.136 CC lib/iscsi/init_grp.o 00:03:27.136 CC lib/iscsi/iscsi.o 00:03:27.394 CC lib/ftl/mngt/ftl_mngt.o 00:03:27.394 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:27.394 CC lib/iscsi/md5.o 00:03:27.394 CC lib/vhost/vhost_scsi.o 00:03:27.394 CC lib/iscsi/param.o 00:03:27.394 CC lib/iscsi/portal_grp.o 00:03:27.653 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:27.653 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:27.653 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:27.653 CC lib/vhost/vhost_blk.o 00:03:27.653 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:27.653 CC lib/iscsi/tgt_node.o 00:03:27.653 CC lib/vhost/rte_vhost_user.o 00:03:27.926 CC lib/iscsi/iscsi_subsystem.o 00:03:28.196 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:28.196 CC lib/iscsi/iscsi_rpc.o 00:03:28.196 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:28.455 CC lib/iscsi/task.o 00:03:28.455 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:28.455 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:28.455 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:28.455 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:28.455 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:28.455 LIB libspdk_iscsi.a 00:03:28.714 CC lib/ftl/utils/ftl_conf.o 00:03:28.714 SO libspdk_iscsi.so.7.0 00:03:28.714 CC lib/ftl/utils/ftl_md.o 00:03:28.714 CC lib/ftl/utils/ftl_mempool.o 00:03:28.714 CC lib/ftl/utils/ftl_bitmap.o 00:03:28.714 CC lib/ftl/utils/ftl_property.o 00:03:28.714 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:28.714 SYMLINK libspdk_iscsi.so 00:03:28.714 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:28.971 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:28.971 LIB libspdk_vhost.a 00:03:28.971 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:28.971 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:28.971 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:28.971 SO libspdk_vhost.so.7.1 00:03:28.971 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:28.971 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:28.971 LIB libspdk_nvmf.a 00:03:28.971 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:29.228 SYMLINK libspdk_vhost.so 00:03:29.228 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:29.228 CC lib/ftl/base/ftl_base_dev.o 00:03:29.229 CC lib/ftl/base/ftl_base_bdev.o 00:03:29.229 CC lib/ftl/ftl_trace.o 00:03:29.229 SO libspdk_nvmf.so.17.0 00:03:29.487 LIB libspdk_ftl.a 00:03:29.487 SYMLINK libspdk_nvmf.so 00:03:29.745 SO libspdk_ftl.so.8.0 00:03:30.004 SYMLINK libspdk_ftl.so 00:03:30.263 CC module/vfu_device/vfu_virtio.o 00:03:30.263 CC module/env_dpdk/env_dpdk_rpc.o 00:03:30.263 CC 
module/blob/bdev/blob_bdev.o 00:03:30.263 CC module/accel/dsa/accel_dsa.o 00:03:30.263 CC module/accel/ioat/accel_ioat.o 00:03:30.263 CC module/accel/iaa/accel_iaa.o 00:03:30.263 CC module/accel/error/accel_error.o 00:03:30.263 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:30.263 CC module/sock/posix/posix.o 00:03:30.263 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:30.263 LIB libspdk_env_dpdk_rpc.a 00:03:30.263 SO libspdk_env_dpdk_rpc.so.5.0 00:03:30.522 CC module/accel/error/accel_error_rpc.o 00:03:30.522 CC module/accel/ioat/accel_ioat_rpc.o 00:03:30.522 SYMLINK libspdk_env_dpdk_rpc.so 00:03:30.522 LIB libspdk_scheduler_dpdk_governor.a 00:03:30.522 CC module/accel/dsa/accel_dsa_rpc.o 00:03:30.522 CC module/accel/iaa/accel_iaa_rpc.o 00:03:30.522 LIB libspdk_scheduler_dynamic.a 00:03:30.522 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:30.522 SO libspdk_scheduler_dynamic.so.3.0 00:03:30.522 LIB libspdk_blob_bdev.a 00:03:30.523 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:30.523 CC module/vfu_device/vfu_virtio_blk.o 00:03:30.523 SO libspdk_blob_bdev.so.10.1 00:03:30.523 SYMLINK libspdk_scheduler_dynamic.so 00:03:30.523 LIB libspdk_accel_error.a 00:03:30.523 CC module/vfu_device/vfu_virtio_scsi.o 00:03:30.523 LIB libspdk_accel_ioat.a 00:03:30.523 SYMLINK libspdk_blob_bdev.so 00:03:30.523 LIB libspdk_accel_dsa.a 00:03:30.523 SO libspdk_accel_error.so.1.0 00:03:30.523 CC module/scheduler/gscheduler/gscheduler.o 00:03:30.523 SO libspdk_accel_ioat.so.5.0 00:03:30.523 SO libspdk_accel_dsa.so.4.0 00:03:30.523 LIB libspdk_accel_iaa.a 00:03:30.523 SO libspdk_accel_iaa.so.2.0 00:03:30.523 SYMLINK libspdk_accel_ioat.so 00:03:30.781 SYMLINK libspdk_accel_error.so 00:03:30.781 SYMLINK libspdk_accel_dsa.so 00:03:30.781 SYMLINK libspdk_accel_iaa.so 00:03:30.781 CC module/bdev/delay/vbdev_delay.o 00:03:30.781 LIB libspdk_scheduler_gscheduler.a 00:03:30.781 CC module/bdev/error/vbdev_error.o 00:03:30.781 SO libspdk_scheduler_gscheduler.so.3.0 00:03:30.781 CC module/bdev/gpt/gpt.o 00:03:30.781 CC module/bdev/lvol/vbdev_lvol.o 00:03:30.781 CC module/blobfs/bdev/blobfs_bdev.o 00:03:30.781 SYMLINK libspdk_scheduler_gscheduler.so 00:03:30.781 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:30.781 CC module/bdev/error/vbdev_error_rpc.o 00:03:30.781 CC module/bdev/malloc/bdev_malloc.o 00:03:31.040 CC module/vfu_device/vfu_virtio_rpc.o 00:03:31.040 LIB libspdk_sock_posix.a 00:03:31.040 SO libspdk_sock_posix.so.5.0 00:03:31.040 CC module/bdev/gpt/vbdev_gpt.o 00:03:31.040 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:31.040 SYMLINK libspdk_sock_posix.so 00:03:31.040 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:31.040 LIB libspdk_blobfs_bdev.a 00:03:31.040 LIB libspdk_bdev_error.a 00:03:31.040 SO libspdk_blobfs_bdev.so.5.0 00:03:31.040 SO libspdk_bdev_error.so.5.0 00:03:31.040 LIB libspdk_vfu_device.a 00:03:31.040 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:31.040 SYMLINK libspdk_blobfs_bdev.so 00:03:31.040 CC module/bdev/null/bdev_null.o 00:03:31.298 SYMLINK libspdk_bdev_error.so 00:03:31.298 SO libspdk_vfu_device.so.2.0 00:03:31.298 SYMLINK libspdk_vfu_device.so 00:03:31.298 LIB libspdk_bdev_gpt.a 00:03:31.298 CC module/bdev/passthru/vbdev_passthru.o 00:03:31.298 CC module/bdev/nvme/bdev_nvme.o 00:03:31.298 LIB libspdk_bdev_delay.a 00:03:31.298 SO libspdk_bdev_gpt.so.5.0 00:03:31.298 LIB libspdk_bdev_malloc.a 00:03:31.298 CC module/bdev/raid/bdev_raid.o 00:03:31.298 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:31.298 SO libspdk_bdev_delay.so.5.0 00:03:31.298 SO libspdk_bdev_malloc.so.5.0 
00:03:31.298 SYMLINK libspdk_bdev_gpt.so 00:03:31.298 LIB libspdk_bdev_lvol.a 00:03:31.298 CC module/bdev/split/vbdev_split.o 00:03:31.298 CC module/bdev/split/vbdev_split_rpc.o 00:03:31.298 SYMLINK libspdk_bdev_delay.so 00:03:31.298 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:31.298 SO libspdk_bdev_lvol.so.5.0 00:03:31.556 SYMLINK libspdk_bdev_malloc.so 00:03:31.556 CC module/bdev/nvme/nvme_rpc.o 00:03:31.556 CC module/bdev/null/bdev_null_rpc.o 00:03:31.556 SYMLINK libspdk_bdev_lvol.so 00:03:31.556 CC module/bdev/raid/bdev_raid_rpc.o 00:03:31.556 LIB libspdk_bdev_passthru.a 00:03:31.556 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:31.556 LIB libspdk_bdev_null.a 00:03:31.556 SO libspdk_bdev_passthru.so.5.0 00:03:31.556 CC module/bdev/aio/bdev_aio.o 00:03:31.556 LIB libspdk_bdev_split.a 00:03:31.556 SO libspdk_bdev_null.so.5.0 00:03:31.814 CC module/bdev/aio/bdev_aio_rpc.o 00:03:31.814 SYMLINK libspdk_bdev_passthru.so 00:03:31.814 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:31.814 SO libspdk_bdev_split.so.5.0 00:03:31.814 SYMLINK libspdk_bdev_null.so 00:03:31.814 CC module/bdev/raid/bdev_raid_sb.o 00:03:31.814 SYMLINK libspdk_bdev_split.so 00:03:31.814 CC module/bdev/raid/raid0.o 00:03:32.072 CC module/bdev/ftl/bdev_ftl.o 00:03:32.072 CC module/bdev/iscsi/bdev_iscsi.o 00:03:32.072 LIB libspdk_bdev_zone_block.a 00:03:32.072 LIB libspdk_bdev_aio.a 00:03:32.072 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:32.072 SO libspdk_bdev_zone_block.so.5.0 00:03:32.072 SO libspdk_bdev_aio.so.5.0 00:03:32.072 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:32.072 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:32.072 SYMLINK libspdk_bdev_zone_block.so 00:03:32.072 SYMLINK libspdk_bdev_aio.so 00:03:32.072 CC module/bdev/nvme/bdev_mdns_client.o 00:03:32.072 CC module/bdev/nvme/vbdev_opal.o 00:03:32.072 CC module/bdev/raid/raid1.o 00:03:32.331 CC module/bdev/raid/concat.o 00:03:32.331 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:32.331 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:32.331 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:32.331 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:32.589 LIB libspdk_bdev_raid.a 00:03:32.589 LIB libspdk_bdev_ftl.a 00:03:32.589 LIB libspdk_bdev_iscsi.a 00:03:32.589 SO libspdk_bdev_raid.so.5.0 00:03:32.589 SO libspdk_bdev_ftl.so.5.0 00:03:32.589 LIB libspdk_bdev_virtio.a 00:03:32.589 SO libspdk_bdev_iscsi.so.5.0 00:03:32.589 SO libspdk_bdev_virtio.so.5.0 00:03:32.589 SYMLINK libspdk_bdev_ftl.so 00:03:32.589 SYMLINK libspdk_bdev_raid.so 00:03:32.589 SYMLINK libspdk_bdev_iscsi.so 00:03:32.589 SYMLINK libspdk_bdev_virtio.so 00:03:33.524 LIB libspdk_bdev_nvme.a 00:03:33.524 SO libspdk_bdev_nvme.so.6.0 00:03:33.524 SYMLINK libspdk_bdev_nvme.so 00:03:34.091 CC module/event/subsystems/iobuf/iobuf.o 00:03:34.091 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:34.091 CC module/event/subsystems/scheduler/scheduler.o 00:03:34.091 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:34.091 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:34.091 CC module/event/subsystems/vmd/vmd.o 00:03:34.091 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:34.091 CC module/event/subsystems/sock/sock.o 00:03:34.091 LIB libspdk_event_scheduler.a 00:03:34.091 LIB libspdk_event_vfu_tgt.a 00:03:34.091 LIB libspdk_event_sock.a 00:03:34.091 LIB libspdk_event_vhost_blk.a 00:03:34.091 LIB libspdk_event_vmd.a 00:03:34.091 LIB libspdk_event_iobuf.a 00:03:34.091 SO libspdk_event_scheduler.so.3.0 00:03:34.091 SO libspdk_event_sock.so.4.0 00:03:34.091 SO libspdk_event_vhost_blk.so.2.0 00:03:34.091 SO 
libspdk_event_vfu_tgt.so.2.0 00:03:34.091 SO libspdk_event_vmd.so.5.0 00:03:34.091 SO libspdk_event_iobuf.so.2.0 00:03:34.091 SYMLINK libspdk_event_scheduler.so 00:03:34.091 SYMLINK libspdk_event_sock.so 00:03:34.091 SYMLINK libspdk_event_vfu_tgt.so 00:03:34.091 SYMLINK libspdk_event_vhost_blk.so 00:03:34.091 SYMLINK libspdk_event_vmd.so 00:03:34.349 SYMLINK libspdk_event_iobuf.so 00:03:34.349 CC module/event/subsystems/accel/accel.o 00:03:34.607 LIB libspdk_event_accel.a 00:03:34.607 SO libspdk_event_accel.so.5.0 00:03:34.607 SYMLINK libspdk_event_accel.so 00:03:34.877 CC module/event/subsystems/bdev/bdev.o 00:03:35.140 LIB libspdk_event_bdev.a 00:03:35.140 SO libspdk_event_bdev.so.5.0 00:03:35.140 SYMLINK libspdk_event_bdev.so 00:03:35.398 CC module/event/subsystems/ublk/ublk.o 00:03:35.398 CC module/event/subsystems/nbd/nbd.o 00:03:35.398 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:35.398 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:35.398 CC module/event/subsystems/scsi/scsi.o 00:03:35.656 LIB libspdk_event_ublk.a 00:03:35.656 LIB libspdk_event_nbd.a 00:03:35.656 SO libspdk_event_nbd.so.5.0 00:03:35.656 SO libspdk_event_ublk.so.2.0 00:03:35.656 LIB libspdk_event_scsi.a 00:03:35.656 SO libspdk_event_scsi.so.5.0 00:03:35.656 LIB libspdk_event_nvmf.a 00:03:35.656 SYMLINK libspdk_event_nbd.so 00:03:35.656 SYMLINK libspdk_event_ublk.so 00:03:35.656 SO libspdk_event_nvmf.so.5.0 00:03:35.656 SYMLINK libspdk_event_scsi.so 00:03:35.656 SYMLINK libspdk_event_nvmf.so 00:03:35.915 CC module/event/subsystems/iscsi/iscsi.o 00:03:35.915 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:35.915 LIB libspdk_event_vhost_scsi.a 00:03:36.172 SO libspdk_event_vhost_scsi.so.2.0 00:03:36.172 LIB libspdk_event_iscsi.a 00:03:36.172 SO libspdk_event_iscsi.so.5.0 00:03:36.172 SYMLINK libspdk_event_vhost_scsi.so 00:03:36.172 SYMLINK libspdk_event_iscsi.so 00:03:36.172 SO libspdk.so.5.0 00:03:36.172 SYMLINK libspdk.so 00:03:36.430 CC app/trace_record/trace_record.o 00:03:36.430 CXX app/trace/trace.o 00:03:36.430 CC app/spdk_lspci/spdk_lspci.o 00:03:36.430 CC app/nvmf_tgt/nvmf_main.o 00:03:36.430 CC app/iscsi_tgt/iscsi_tgt.o 00:03:36.430 CC examples/accel/perf/accel_perf.o 00:03:36.430 CC test/accel/dif/dif.o 00:03:36.430 CC app/spdk_tgt/spdk_tgt.o 00:03:36.430 CC examples/bdev/hello_world/hello_bdev.o 00:03:36.430 CC examples/blob/hello_world/hello_blob.o 00:03:36.687 LINK spdk_lspci 00:03:36.687 LINK nvmf_tgt 00:03:36.687 LINK spdk_trace_record 00:03:36.687 LINK iscsi_tgt 00:03:36.687 LINK spdk_tgt 00:03:36.946 LINK hello_bdev 00:03:36.946 LINK hello_blob 00:03:36.946 CC app/spdk_nvme_perf/perf.o 00:03:36.946 LINK spdk_trace 00:03:36.946 LINK dif 00:03:36.946 LINK accel_perf 00:03:36.946 CC app/spdk_nvme_identify/identify.o 00:03:37.202 CC test/app/bdev_svc/bdev_svc.o 00:03:37.202 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:37.202 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:37.202 CC test/app/histogram_perf/histogram_perf.o 00:03:37.202 CC examples/blob/cli/blobcli.o 00:03:37.202 CC examples/bdev/bdevperf/bdevperf.o 00:03:37.202 LINK bdev_svc 00:03:37.459 CC test/bdev/bdevio/bdevio.o 00:03:37.459 LINK histogram_perf 00:03:37.459 CC test/blobfs/mkfs/mkfs.o 00:03:37.459 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:37.459 TEST_HEADER include/spdk/accel.h 00:03:37.459 TEST_HEADER include/spdk/accel_module.h 00:03:37.459 TEST_HEADER include/spdk/assert.h 00:03:37.459 TEST_HEADER include/spdk/barrier.h 00:03:37.459 TEST_HEADER include/spdk/base64.h 00:03:37.459 TEST_HEADER include/spdk/bdev.h 
00:03:37.459 TEST_HEADER include/spdk/bdev_module.h 00:03:37.459 TEST_HEADER include/spdk/bdev_zone.h 00:03:37.459 LINK nvme_fuzz 00:03:37.459 TEST_HEADER include/spdk/bit_array.h 00:03:37.459 TEST_HEADER include/spdk/bit_pool.h 00:03:37.459 TEST_HEADER include/spdk/blob_bdev.h 00:03:37.459 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:37.459 TEST_HEADER include/spdk/blobfs.h 00:03:37.459 LINK mkfs 00:03:37.459 TEST_HEADER include/spdk/blob.h 00:03:37.459 TEST_HEADER include/spdk/conf.h 00:03:37.459 TEST_HEADER include/spdk/config.h 00:03:37.459 TEST_HEADER include/spdk/cpuset.h 00:03:37.459 TEST_HEADER include/spdk/crc16.h 00:03:37.717 TEST_HEADER include/spdk/crc32.h 00:03:37.717 TEST_HEADER include/spdk/crc64.h 00:03:37.717 TEST_HEADER include/spdk/dif.h 00:03:37.717 TEST_HEADER include/spdk/dma.h 00:03:37.717 TEST_HEADER include/spdk/endian.h 00:03:37.717 TEST_HEADER include/spdk/env_dpdk.h 00:03:37.717 TEST_HEADER include/spdk/env.h 00:03:37.717 TEST_HEADER include/spdk/event.h 00:03:37.717 TEST_HEADER include/spdk/fd_group.h 00:03:37.717 TEST_HEADER include/spdk/fd.h 00:03:37.717 TEST_HEADER include/spdk/file.h 00:03:37.717 TEST_HEADER include/spdk/ftl.h 00:03:37.717 TEST_HEADER include/spdk/gpt_spec.h 00:03:37.717 TEST_HEADER include/spdk/hexlify.h 00:03:37.717 TEST_HEADER include/spdk/histogram_data.h 00:03:37.717 TEST_HEADER include/spdk/idxd.h 00:03:37.717 TEST_HEADER include/spdk/idxd_spec.h 00:03:37.717 TEST_HEADER include/spdk/init.h 00:03:37.717 TEST_HEADER include/spdk/ioat.h 00:03:37.717 TEST_HEADER include/spdk/ioat_spec.h 00:03:37.717 TEST_HEADER include/spdk/iscsi_spec.h 00:03:37.717 TEST_HEADER include/spdk/json.h 00:03:37.717 TEST_HEADER include/spdk/jsonrpc.h 00:03:37.717 TEST_HEADER include/spdk/likely.h 00:03:37.717 TEST_HEADER include/spdk/log.h 00:03:37.717 TEST_HEADER include/spdk/lvol.h 00:03:37.717 TEST_HEADER include/spdk/memory.h 00:03:37.717 TEST_HEADER include/spdk/mmio.h 00:03:37.717 TEST_HEADER include/spdk/nbd.h 00:03:37.717 TEST_HEADER include/spdk/notify.h 00:03:37.717 TEST_HEADER include/spdk/nvme.h 00:03:37.717 TEST_HEADER include/spdk/nvme_intel.h 00:03:37.717 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:37.717 LINK spdk_nvme_perf 00:03:37.717 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:37.717 TEST_HEADER include/spdk/nvme_spec.h 00:03:37.717 TEST_HEADER include/spdk/nvme_zns.h 00:03:37.717 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:37.717 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:37.717 TEST_HEADER include/spdk/nvmf.h 00:03:37.717 TEST_HEADER include/spdk/nvmf_spec.h 00:03:37.717 TEST_HEADER include/spdk/nvmf_transport.h 00:03:37.717 TEST_HEADER include/spdk/opal.h 00:03:37.717 TEST_HEADER include/spdk/opal_spec.h 00:03:37.717 TEST_HEADER include/spdk/pci_ids.h 00:03:37.717 TEST_HEADER include/spdk/pipe.h 00:03:37.717 TEST_HEADER include/spdk/queue.h 00:03:37.717 TEST_HEADER include/spdk/reduce.h 00:03:37.717 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:37.717 TEST_HEADER include/spdk/rpc.h 00:03:37.717 TEST_HEADER include/spdk/scheduler.h 00:03:37.717 TEST_HEADER include/spdk/scsi.h 00:03:37.717 TEST_HEADER include/spdk/scsi_spec.h 00:03:37.717 TEST_HEADER include/spdk/sock.h 00:03:37.717 TEST_HEADER include/spdk/stdinc.h 00:03:37.717 TEST_HEADER include/spdk/string.h 00:03:37.717 TEST_HEADER include/spdk/thread.h 00:03:37.717 LINK blobcli 00:03:37.717 TEST_HEADER include/spdk/trace.h 00:03:37.717 TEST_HEADER include/spdk/trace_parser.h 00:03:37.717 TEST_HEADER include/spdk/tree.h 00:03:37.717 TEST_HEADER include/spdk/ublk.h 00:03:37.717 
TEST_HEADER include/spdk/util.h 00:03:37.717 TEST_HEADER include/spdk/uuid.h 00:03:37.717 LINK bdevio 00:03:37.717 TEST_HEADER include/spdk/version.h 00:03:37.717 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:37.717 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:37.717 TEST_HEADER include/spdk/vhost.h 00:03:37.717 TEST_HEADER include/spdk/vmd.h 00:03:37.717 TEST_HEADER include/spdk/xor.h 00:03:37.717 TEST_HEADER include/spdk/zipf.h 00:03:37.717 CXX test/cpp_headers/accel.o 00:03:37.717 CXX test/cpp_headers/accel_module.o 00:03:37.975 CXX test/cpp_headers/assert.o 00:03:37.975 LINK spdk_nvme_identify 00:03:37.975 CC test/dma/test_dma/test_dma.o 00:03:37.975 CXX test/cpp_headers/barrier.o 00:03:37.975 LINK bdevperf 00:03:37.975 CC test/app/jsoncat/jsoncat.o 00:03:37.975 CC test/event/event_perf/event_perf.o 00:03:37.975 CC test/event/reactor/reactor.o 00:03:37.975 CXX test/cpp_headers/base64.o 00:03:38.233 CC app/spdk_nvme_discover/discovery_aer.o 00:03:38.233 LINK vhost_fuzz 00:03:38.233 CC test/env/mem_callbacks/mem_callbacks.o 00:03:38.233 LINK jsoncat 00:03:38.233 LINK event_perf 00:03:38.233 LINK reactor 00:03:38.233 CXX test/cpp_headers/bdev.o 00:03:38.233 LINK test_dma 00:03:38.233 CC examples/ioat/perf/perf.o 00:03:38.233 LINK spdk_nvme_discover 00:03:38.233 CC test/event/reactor_perf/reactor_perf.o 00:03:38.491 CXX test/cpp_headers/bdev_module.o 00:03:38.491 CXX test/cpp_headers/bdev_zone.o 00:03:38.491 CXX test/cpp_headers/bit_array.o 00:03:38.491 LINK reactor_perf 00:03:38.491 CC test/event/app_repeat/app_repeat.o 00:03:38.491 LINK ioat_perf 00:03:38.491 CC app/spdk_top/spdk_top.o 00:03:38.750 CXX test/cpp_headers/bit_pool.o 00:03:38.750 CC app/vhost/vhost.o 00:03:38.750 CC test/event/scheduler/scheduler.o 00:03:38.750 CC app/spdk_dd/spdk_dd.o 00:03:38.750 LINK app_repeat 00:03:38.750 CC examples/ioat/verify/verify.o 00:03:38.750 CC app/fio/nvme/fio_plugin.o 00:03:38.750 LINK mem_callbacks 00:03:38.750 CXX test/cpp_headers/blob_bdev.o 00:03:38.750 LINK vhost 00:03:39.008 LINK iscsi_fuzz 00:03:39.008 LINK scheduler 00:03:39.008 CC test/env/vtophys/vtophys.o 00:03:39.008 CXX test/cpp_headers/blobfs_bdev.o 00:03:39.008 LINK verify 00:03:39.008 CC app/fio/bdev/fio_plugin.o 00:03:39.008 CXX test/cpp_headers/blobfs.o 00:03:39.267 LINK spdk_dd 00:03:39.267 LINK vtophys 00:03:39.267 CC test/app/stub/stub.o 00:03:39.267 CXX test/cpp_headers/blob.o 00:03:39.267 CC examples/nvme/reconnect/reconnect.o 00:03:39.267 CC examples/nvme/hello_world/hello_world.o 00:03:39.267 LINK spdk_nvme 00:03:39.267 CXX test/cpp_headers/conf.o 00:03:39.267 CC test/lvol/esnap/esnap.o 00:03:39.267 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:39.525 LINK stub 00:03:39.525 CXX test/cpp_headers/config.o 00:03:39.525 LINK spdk_top 00:03:39.525 CXX test/cpp_headers/cpuset.o 00:03:39.525 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:39.525 LINK spdk_bdev 00:03:39.525 LINK env_dpdk_post_init 00:03:39.525 LINK hello_world 00:03:39.525 CC examples/nvme/arbitration/arbitration.o 00:03:39.525 CXX test/cpp_headers/crc16.o 00:03:39.525 CXX test/cpp_headers/crc32.o 00:03:39.783 CXX test/cpp_headers/crc64.o 00:03:39.783 LINK reconnect 00:03:39.783 CC examples/sock/hello_world/hello_sock.o 00:03:39.783 CC test/env/memory/memory_ut.o 00:03:39.783 CC test/env/pci/pci_ut.o 00:03:39.783 CXX test/cpp_headers/dif.o 00:03:39.783 CC test/nvme/aer/aer.o 00:03:39.783 CC test/nvme/reset/reset.o 00:03:40.041 CC examples/vmd/lsvmd/lsvmd.o 00:03:40.041 LINK arbitration 00:03:40.041 CXX test/cpp_headers/dma.o 00:03:40.041 LINK 
hello_sock 00:03:40.041 LINK nvme_manage 00:03:40.041 LINK lsvmd 00:03:40.041 CXX test/cpp_headers/endian.o 00:03:40.299 LINK reset 00:03:40.299 LINK aer 00:03:40.299 LINK pci_ut 00:03:40.300 CC examples/nvme/hotplug/hotplug.o 00:03:40.300 CC examples/vmd/led/led.o 00:03:40.300 CXX test/cpp_headers/env_dpdk.o 00:03:40.300 CC examples/util/zipf/zipf.o 00:03:40.300 CC examples/nvmf/nvmf/nvmf.o 00:03:40.300 CXX test/cpp_headers/env.o 00:03:40.300 CC test/nvme/sgl/sgl.o 00:03:40.557 LINK led 00:03:40.557 CXX test/cpp_headers/event.o 00:03:40.557 LINK zipf 00:03:40.557 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:40.557 LINK hotplug 00:03:40.557 CC test/rpc_client/rpc_client_test.o 00:03:40.557 CXX test/cpp_headers/fd_group.o 00:03:40.557 CXX test/cpp_headers/fd.o 00:03:40.815 LINK nvmf 00:03:40.815 LINK sgl 00:03:40.815 CXX test/cpp_headers/file.o 00:03:40.815 LINK memory_ut 00:03:40.815 LINK cmb_copy 00:03:40.815 CC examples/thread/thread/thread_ex.o 00:03:40.815 LINK rpc_client_test 00:03:40.815 CXX test/cpp_headers/ftl.o 00:03:40.815 CC examples/nvme/abort/abort.o 00:03:40.815 CXX test/cpp_headers/gpt_spec.o 00:03:41.074 CC test/nvme/e2edp/nvme_dp.o 00:03:41.074 CXX test/cpp_headers/hexlify.o 00:03:41.074 CC test/nvme/overhead/overhead.o 00:03:41.074 CXX test/cpp_headers/histogram_data.o 00:03:41.074 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:41.074 LINK thread 00:03:41.074 CC test/nvme/err_injection/err_injection.o 00:03:41.074 LINK pmr_persistence 00:03:41.074 CXX test/cpp_headers/idxd.o 00:03:41.074 CC examples/idxd/perf/perf.o 00:03:41.330 CC test/nvme/startup/startup.o 00:03:41.331 LINK nvme_dp 00:03:41.331 LINK abort 00:03:41.331 LINK overhead 00:03:41.331 CXX test/cpp_headers/idxd_spec.o 00:03:41.331 LINK err_injection 00:03:41.331 CXX test/cpp_headers/init.o 00:03:41.331 LINK startup 00:03:41.331 CXX test/cpp_headers/ioat.o 00:03:41.331 CC test/nvme/reserve/reserve.o 00:03:41.588 CXX test/cpp_headers/ioat_spec.o 00:03:41.588 CXX test/cpp_headers/iscsi_spec.o 00:03:41.588 CXX test/cpp_headers/json.o 00:03:41.588 LINK idxd_perf 00:03:41.588 CXX test/cpp_headers/jsonrpc.o 00:03:41.588 CC test/thread/poller_perf/poller_perf.o 00:03:41.588 CC test/nvme/simple_copy/simple_copy.o 00:03:41.588 LINK reserve 00:03:41.588 CXX test/cpp_headers/likely.o 00:03:41.588 CXX test/cpp_headers/log.o 00:03:41.846 CXX test/cpp_headers/lvol.o 00:03:41.846 CXX test/cpp_headers/memory.o 00:03:41.846 LINK poller_perf 00:03:41.846 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:41.846 CXX test/cpp_headers/mmio.o 00:03:41.846 CC test/nvme/connect_stress/connect_stress.o 00:03:41.846 LINK simple_copy 00:03:41.846 CXX test/cpp_headers/nbd.o 00:03:41.847 CXX test/cpp_headers/notify.o 00:03:41.847 CXX test/cpp_headers/nvme.o 00:03:41.847 CC test/nvme/boot_partition/boot_partition.o 00:03:42.156 LINK interrupt_tgt 00:03:42.156 CC test/nvme/compliance/nvme_compliance.o 00:03:42.156 CXX test/cpp_headers/nvme_intel.o 00:03:42.156 CC test/nvme/fused_ordering/fused_ordering.o 00:03:42.156 LINK connect_stress 00:03:42.156 CXX test/cpp_headers/nvme_ocssd.o 00:03:42.156 LINK boot_partition 00:03:42.156 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:42.156 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:42.156 CC test/nvme/fdp/fdp.o 00:03:42.415 LINK fused_ordering 00:03:42.415 CXX test/cpp_headers/nvme_spec.o 00:03:42.415 CC test/nvme/cuse/cuse.o 00:03:42.415 CXX test/cpp_headers/nvme_zns.o 00:03:42.415 LINK nvme_compliance 00:03:42.415 CXX test/cpp_headers/nvmf_cmd.o 00:03:42.415 LINK doorbell_aers 00:03:42.415 
CXX test/cpp_headers/nvmf_fc_spec.o 00:03:42.415 CXX test/cpp_headers/nvmf.o 00:03:42.415 CXX test/cpp_headers/nvmf_spec.o 00:03:42.673 CXX test/cpp_headers/nvmf_transport.o 00:03:42.673 CXX test/cpp_headers/opal.o 00:03:42.673 LINK fdp 00:03:42.673 CXX test/cpp_headers/opal_spec.o 00:03:42.932 CXX test/cpp_headers/pci_ids.o 00:03:42.932 CXX test/cpp_headers/pipe.o 00:03:42.932 CXX test/cpp_headers/queue.o 00:03:42.932 CXX test/cpp_headers/reduce.o 00:03:42.932 CXX test/cpp_headers/rpc.o 00:03:42.932 CXX test/cpp_headers/scheduler.o 00:03:42.932 CXX test/cpp_headers/scsi.o 00:03:42.932 CXX test/cpp_headers/scsi_spec.o 00:03:42.932 CXX test/cpp_headers/sock.o 00:03:42.932 CXX test/cpp_headers/stdinc.o 00:03:43.190 CXX test/cpp_headers/string.o 00:03:43.190 CXX test/cpp_headers/thread.o 00:03:43.190 CXX test/cpp_headers/trace.o 00:03:43.190 CXX test/cpp_headers/trace_parser.o 00:03:43.190 CXX test/cpp_headers/tree.o 00:03:43.190 CXX test/cpp_headers/ublk.o 00:03:43.190 CXX test/cpp_headers/util.o 00:03:43.190 CXX test/cpp_headers/uuid.o 00:03:43.190 CXX test/cpp_headers/version.o 00:03:43.190 CXX test/cpp_headers/vfio_user_pci.o 00:03:43.190 CXX test/cpp_headers/vfio_user_spec.o 00:03:43.190 CXX test/cpp_headers/vhost.o 00:03:43.449 CXX test/cpp_headers/vmd.o 00:03:43.449 CXX test/cpp_headers/xor.o 00:03:43.449 CXX test/cpp_headers/zipf.o 00:03:43.449 LINK cuse 00:03:44.016 LINK esnap 00:03:48.206 ************************************ 00:03:48.206 END TEST make 00:03:48.206 ************************************ 00:03:48.206 00:03:48.206 real 1m6.776s 00:03:48.206 user 6m54.914s 00:03:48.206 sys 1m42.101s 00:03:48.206 06:34:01 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:48.206 06:34:01 -- common/autotest_common.sh@10 -- $ set +x 00:03:48.206 06:34:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:48.206 06:34:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:48.206 06:34:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:48.206 06:34:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:48.206 06:34:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:48.206 06:34:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:48.206 06:34:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:48.206 06:34:02 -- scripts/common.sh@335 -- # IFS=.-: 00:03:48.206 06:34:02 -- scripts/common.sh@335 -- # read -ra ver1 00:03:48.206 06:34:02 -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.206 06:34:02 -- scripts/common.sh@336 -- # read -ra ver2 00:03:48.206 06:34:02 -- scripts/common.sh@337 -- # local 'op=<' 00:03:48.206 06:34:02 -- scripts/common.sh@339 -- # ver1_l=2 00:03:48.206 06:34:02 -- scripts/common.sh@340 -- # ver2_l=1 00:03:48.206 06:34:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:48.206 06:34:02 -- scripts/common.sh@343 -- # case "$op" in 00:03:48.206 06:34:02 -- scripts/common.sh@344 -- # : 1 00:03:48.206 06:34:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:48.206 06:34:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:48.206 06:34:02 -- scripts/common.sh@364 -- # decimal 1 00:03:48.206 06:34:02 -- scripts/common.sh@352 -- # local d=1 00:03:48.206 06:34:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.206 06:34:02 -- scripts/common.sh@354 -- # echo 1 00:03:48.206 06:34:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:48.206 06:34:02 -- scripts/common.sh@365 -- # decimal 2 00:03:48.206 06:34:02 -- scripts/common.sh@352 -- # local d=2 00:03:48.206 06:34:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.206 06:34:02 -- scripts/common.sh@354 -- # echo 2 00:03:48.206 06:34:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:48.206 06:34:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:48.206 06:34:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:48.206 06:34:02 -- scripts/common.sh@367 -- # return 0 00:03:48.206 06:34:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.206 06:34:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:48.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.206 --rc genhtml_branch_coverage=1 00:03:48.206 --rc genhtml_function_coverage=1 00:03:48.206 --rc genhtml_legend=1 00:03:48.206 --rc geninfo_all_blocks=1 00:03:48.206 --rc geninfo_unexecuted_blocks=1 00:03:48.206 00:03:48.206 ' 00:03:48.206 06:34:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:48.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.206 --rc genhtml_branch_coverage=1 00:03:48.206 --rc genhtml_function_coverage=1 00:03:48.206 --rc genhtml_legend=1 00:03:48.206 --rc geninfo_all_blocks=1 00:03:48.206 --rc geninfo_unexecuted_blocks=1 00:03:48.206 00:03:48.206 ' 00:03:48.206 06:34:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:48.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.206 --rc genhtml_branch_coverage=1 00:03:48.206 --rc genhtml_function_coverage=1 00:03:48.206 --rc genhtml_legend=1 00:03:48.206 --rc geninfo_all_blocks=1 00:03:48.206 --rc geninfo_unexecuted_blocks=1 00:03:48.206 00:03:48.206 ' 00:03:48.206 06:34:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:48.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.206 --rc genhtml_branch_coverage=1 00:03:48.206 --rc genhtml_function_coverage=1 00:03:48.206 --rc genhtml_legend=1 00:03:48.206 --rc geninfo_all_blocks=1 00:03:48.206 --rc geninfo_unexecuted_blocks=1 00:03:48.206 00:03:48.206 ' 00:03:48.206 06:34:02 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:48.206 06:34:02 -- nvmf/common.sh@7 -- # uname -s 00:03:48.206 06:34:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:48.206 06:34:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:48.206 06:34:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:48.206 06:34:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:48.206 06:34:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:48.206 06:34:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:48.206 06:34:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:48.206 06:34:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:48.206 06:34:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:48.206 06:34:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:48.206 06:34:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:03:48.206 
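The scripts/common.sh trace above (lt 1.15 2 -> cmp_versions 1.15 '<' 2) is a field-by-field version comparison: both version strings are split on the characters .-: and their numeric fields are compared left to right, with the shorter version padded with zeros. The standalone sketch below re-creates that idea for illustration only; version_lt is a made-up name and, unlike the real helper, it assumes purely numeric fields.

    # Minimal version comparison in the spirit of the cmp_versions trace (illustrative only).
    version_lt() {
        local IFS='.-:'
        local -a a=($1) b=($2)
        local i
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0   # first differing field decides: older
            (( x > y )) && return 1   # ... or newer
        done
        return 1                      # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x: pass the branch/function coverage flags'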
06:34:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:03:48.206 06:34:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:48.206 06:34:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:48.206 06:34:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:48.206 06:34:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:48.206 06:34:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:48.206 06:34:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:48.206 06:34:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:48.206 06:34:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.206 06:34:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.206 06:34:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.206 06:34:02 -- paths/export.sh@5 -- # export PATH 00:03:48.206 06:34:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.206 06:34:02 -- nvmf/common.sh@46 -- # : 0 00:03:48.206 06:34:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:48.206 06:34:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:48.206 06:34:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:48.206 06:34:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:48.206 06:34:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:48.206 06:34:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:48.206 06:34:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:48.206 06:34:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:48.207 06:34:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:48.207 06:34:02 -- spdk/autotest.sh@32 -- # uname -s 00:03:48.207 06:34:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:48.207 06:34:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:48.207 06:34:02 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:48.207 06:34:02 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:48.207 06:34:02 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:48.207 06:34:02 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:48.207 06:34:02 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:48.207 06:34:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:48.207 06:34:02 -- spdk/autotest.sh@48 -- # 
udevadm_pid=49768 00:03:48.207 06:34:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:48.207 06:34:02 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:48.207 06:34:02 -- spdk/autotest.sh@54 -- # echo 49776 00:03:48.207 06:34:02 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:48.207 06:34:02 -- spdk/autotest.sh@56 -- # echo 49779 00:03:48.207 06:34:02 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:48.465 06:34:02 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:48.465 06:34:02 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:48.465 06:34:02 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:48.465 06:34:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:48.465 06:34:02 -- common/autotest_common.sh@10 -- # set +x 00:03:48.465 06:34:02 -- spdk/autotest.sh@70 -- # create_test_list 00:03:48.465 06:34:02 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:48.465 06:34:02 -- common/autotest_common.sh@10 -- # set +x 00:03:48.465 06:34:02 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:48.465 06:34:02 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:48.465 06:34:02 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:48.465 06:34:02 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:48.465 06:34:02 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:48.465 06:34:02 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:48.465 06:34:02 -- common/autotest_common.sh@1450 -- # uname 00:03:48.465 06:34:02 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:48.465 06:34:02 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:48.465 06:34:02 -- common/autotest_common.sh@1470 -- # uname 00:03:48.465 06:34:02 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:48.465 06:34:02 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:48.465 06:34:02 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:48.466 lcov: LCOV version 1.15 00:03:48.466 06:34:02 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:56.586 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:56.586 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:56.586 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:56.586 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:56.586 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:56.586 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:14.672 06:34:27 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:14.672 06:34:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:14.672 06:34:27 -- common/autotest_common.sh@10 -- # set +x 00:04:14.672 06:34:27 -- spdk/autotest.sh@89 -- # rm -f 00:04:14.672 06:34:27 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.672 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.672 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:14.672 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:14.672 06:34:28 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:14.672 06:34:28 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:14.672 06:34:28 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:14.672 06:34:28 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:14.672 06:34:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:14.672 06:34:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:14.672 06:34:28 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:14.672 06:34:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:14.672 06:34:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:14.672 06:34:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:14.672 06:34:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:14.672 06:34:28 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:14.672 06:34:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:14.672 06:34:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:14.672 06:34:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:14.672 06:34:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:14.672 06:34:28 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:14.672 06:34:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:14.672 06:34:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:14.672 06:34:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:14.672 06:34:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:14.672 06:34:28 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:14.672 06:34:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:14.672 06:34:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:14.672 06:34:28 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:14.672 06:34:28 -- spdk/autotest.sh@108 -- # grep -v p 00:04:14.672 06:34:28 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:04:14.672 06:34:28 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:14.672 06:34:28 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:14.672 06:34:28 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:14.672 06:34:28 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:14.672 06:34:28 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:14.672 No valid GPT data, bailing 00:04:14.672 06:34:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
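The get_zoned_devs pass traced above decides, for each /sys/block/nvme* entry, whether the namespace is zoned by reading its queue/zoned attribute; anything other than "none" would be recorded and kept away from the destructive wipe that follows. A hedged re-creation of just that check is sketched below; it only prints what it finds and does not populate the zoned_devs array the real helper builds.

    # Illustrative sketch: report NVMe namespaces whose kernel zoned model is not "none".
    for sys in /sys/block/nvme*; do
        [[ -e $sys/queue/zoned ]] || continue     # attribute may be absent on older kernels
        dev=${sys##*/}
        zoned=$(<"$sys/queue/zoned")
        if [[ $zoned != none ]]; then
            echo "$dev is zoned ($zoned); keep it out of the wipe step below"
        fi
    done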
00:04:14.672 06:34:28 -- scripts/common.sh@393 -- # pt= 00:04:14.672 06:34:28 -- scripts/common.sh@394 -- # return 1 00:04:14.672 06:34:28 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:14.672 1+0 records in 00:04:14.672 1+0 records out 00:04:14.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503046 s, 208 MB/s 00:04:14.672 06:34:28 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:14.672 06:34:28 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:14.673 06:34:28 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:04:14.673 06:34:28 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:14.673 06:34:28 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:14.673 No valid GPT data, bailing 00:04:14.673 06:34:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:14.673 06:34:28 -- scripts/common.sh@393 -- # pt= 00:04:14.673 06:34:28 -- scripts/common.sh@394 -- # return 1 00:04:14.673 06:34:28 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:14.673 1+0 records in 00:04:14.673 1+0 records out 00:04:14.673 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00369799 s, 284 MB/s 00:04:14.673 06:34:28 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:14.673 06:34:28 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:14.673 06:34:28 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:04:14.673 06:34:28 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:04:14.673 06:34:28 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:14.673 No valid GPT data, bailing 00:04:14.673 06:34:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:14.673 06:34:28 -- scripts/common.sh@393 -- # pt= 00:04:14.673 06:34:28 -- scripts/common.sh@394 -- # return 1 00:04:14.673 06:34:28 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:14.673 1+0 records in 00:04:14.673 1+0 records out 00:04:14.673 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00483029 s, 217 MB/s 00:04:14.673 06:34:28 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:14.673 06:34:28 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:14.673 06:34:28 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:04:14.673 06:34:28 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:04:14.673 06:34:28 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:14.673 No valid GPT data, bailing 00:04:14.673 06:34:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:14.673 06:34:28 -- scripts/common.sh@393 -- # pt= 00:04:14.673 06:34:28 -- scripts/common.sh@394 -- # return 1 00:04:14.673 06:34:28 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:14.673 1+0 records in 00:04:14.673 1+0 records out 00:04:14.673 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.004487 s, 234 MB/s 00:04:14.673 06:34:28 -- spdk/autotest.sh@116 -- # sync 00:04:14.673 06:34:28 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:14.673 06:34:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:14.673 06:34:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:16.575 06:34:30 -- spdk/autotest.sh@122 -- # uname -s 00:04:16.575 06:34:30 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 
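Each candidate namespace above was probed for an existing partition table (spdk-gpt.py followed by blkid -s PTTYPE), and only after both came back empty ("No valid GPT data, bailing", pt='') did autotest zero the first 1 MiB with dd so the device starts the run in a known-clean state. A condensed sketch of that per-device decision follows, with blkid standing in for both probes; it is destructive by design, so treat it as documentation rather than something to paste into a shell.

    # Probe-then-wipe flow from the log, approximated with blkid alone.
    for dev in /dev/nvme*n*; do
        [[ $dev == *p* ]] && continue             # skip partitions, like the 'grep -v p' above
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        if [[ -z $pt ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1   # same 1 MiB wipe as in the log
        else
            echo "$dev carries a $pt partition table; leaving it untouched"
        fi
    done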
00:04:16.575 06:34:30 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:16.575 06:34:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:16.575 06:34:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:16.575 06:34:30 -- common/autotest_common.sh@10 -- # set +x 00:04:16.575 ************************************ 00:04:16.575 START TEST setup.sh 00:04:16.575 ************************************ 00:04:16.575 06:34:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:16.575 * Looking for test storage... 00:04:16.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:16.575 06:34:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:16.575 06:34:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:16.575 06:34:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:16.575 06:34:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:16.575 06:34:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:16.575 06:34:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:16.575 06:34:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:16.575 06:34:30 -- scripts/common.sh@335 -- # IFS=.-: 00:04:16.575 06:34:30 -- scripts/common.sh@335 -- # read -ra ver1 00:04:16.575 06:34:30 -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.575 06:34:30 -- scripts/common.sh@336 -- # read -ra ver2 00:04:16.575 06:34:30 -- scripts/common.sh@337 -- # local 'op=<' 00:04:16.575 06:34:30 -- scripts/common.sh@339 -- # ver1_l=2 00:04:16.575 06:34:30 -- scripts/common.sh@340 -- # ver2_l=1 00:04:16.575 06:34:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:16.575 06:34:30 -- scripts/common.sh@343 -- # case "$op" in 00:04:16.575 06:34:30 -- scripts/common.sh@344 -- # : 1 00:04:16.575 06:34:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:16.575 06:34:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.575 06:34:30 -- scripts/common.sh@364 -- # decimal 1 00:04:16.575 06:34:30 -- scripts/common.sh@352 -- # local d=1 00:04:16.575 06:34:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.575 06:34:30 -- scripts/common.sh@354 -- # echo 1 00:04:16.575 06:34:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:16.575 06:34:30 -- scripts/common.sh@365 -- # decimal 2 00:04:16.575 06:34:30 -- scripts/common.sh@352 -- # local d=2 00:04:16.575 06:34:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.575 06:34:30 -- scripts/common.sh@354 -- # echo 2 00:04:16.575 06:34:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:16.575 06:34:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:16.575 06:34:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:16.575 06:34:30 -- scripts/common.sh@367 -- # return 0 00:04:16.575 06:34:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.575 06:34:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:16.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.575 --rc genhtml_branch_coverage=1 00:04:16.575 --rc genhtml_function_coverage=1 00:04:16.576 --rc genhtml_legend=1 00:04:16.576 --rc geninfo_all_blocks=1 00:04:16.576 --rc geninfo_unexecuted_blocks=1 00:04:16.576 00:04:16.576 ' 00:04:16.576 06:34:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:16.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.576 --rc genhtml_branch_coverage=1 00:04:16.576 --rc genhtml_function_coverage=1 00:04:16.576 --rc genhtml_legend=1 00:04:16.576 --rc geninfo_all_blocks=1 00:04:16.576 --rc geninfo_unexecuted_blocks=1 00:04:16.576 00:04:16.576 ' 00:04:16.576 06:34:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:16.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.576 --rc genhtml_branch_coverage=1 00:04:16.576 --rc genhtml_function_coverage=1 00:04:16.576 --rc genhtml_legend=1 00:04:16.576 --rc geninfo_all_blocks=1 00:04:16.576 --rc geninfo_unexecuted_blocks=1 00:04:16.576 00:04:16.576 ' 00:04:16.576 06:34:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:16.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.576 --rc genhtml_branch_coverage=1 00:04:16.576 --rc genhtml_function_coverage=1 00:04:16.576 --rc genhtml_legend=1 00:04:16.576 --rc geninfo_all_blocks=1 00:04:16.576 --rc geninfo_unexecuted_blocks=1 00:04:16.576 00:04:16.576 ' 00:04:16.576 06:34:30 -- setup/test-setup.sh@10 -- # uname -s 00:04:16.576 06:34:30 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:16.576 06:34:30 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:16.576 06:34:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:16.576 06:34:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:16.576 06:34:30 -- common/autotest_common.sh@10 -- # set +x 00:04:16.576 ************************************ 00:04:16.576 START TEST acl 00:04:16.576 ************************************ 00:04:16.576 06:34:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:16.835 * Looking for test storage... 
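run_test, as invoked above for setup.sh and acl, is what frames every sub-test in this log with the START TEST / END TEST banners and a real/user/sys timing block. The wrapper below is only a rough stand-in to show the shape of that pattern; mini_run_test is an invented name and this is not SPDK's actual run_test implementation.

    # Hypothetical mini_run_test: label a test command, time it, and print banner
    # lines similar to the ones appearing throughout this log.
    mini_run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }

    mini_run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh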
00:04:16.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:16.835 06:34:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:16.835 06:34:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:16.835 06:34:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:16.835 06:34:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:16.835 06:34:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:16.835 06:34:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:16.835 06:34:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:16.835 06:34:30 -- scripts/common.sh@335 -- # IFS=.-: 00:04:16.835 06:34:30 -- scripts/common.sh@335 -- # read -ra ver1 00:04:16.835 06:34:30 -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.835 06:34:30 -- scripts/common.sh@336 -- # read -ra ver2 00:04:16.835 06:34:30 -- scripts/common.sh@337 -- # local 'op=<' 00:04:16.835 06:34:30 -- scripts/common.sh@339 -- # ver1_l=2 00:04:16.835 06:34:30 -- scripts/common.sh@340 -- # ver2_l=1 00:04:16.835 06:34:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:16.835 06:34:30 -- scripts/common.sh@343 -- # case "$op" in 00:04:16.835 06:34:30 -- scripts/common.sh@344 -- # : 1 00:04:16.835 06:34:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:16.835 06:34:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:16.835 06:34:30 -- scripts/common.sh@364 -- # decimal 1 00:04:16.835 06:34:30 -- scripts/common.sh@352 -- # local d=1 00:04:16.835 06:34:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.835 06:34:30 -- scripts/common.sh@354 -- # echo 1 00:04:16.835 06:34:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:16.835 06:34:30 -- scripts/common.sh@365 -- # decimal 2 00:04:16.835 06:34:30 -- scripts/common.sh@352 -- # local d=2 00:04:16.835 06:34:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.835 06:34:30 -- scripts/common.sh@354 -- # echo 2 00:04:16.835 06:34:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:16.835 06:34:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:16.835 06:34:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:16.835 06:34:30 -- scripts/common.sh@367 -- # return 0 00:04:16.835 06:34:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.835 06:34:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:16.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.835 --rc genhtml_branch_coverage=1 00:04:16.835 --rc genhtml_function_coverage=1 00:04:16.835 --rc genhtml_legend=1 00:04:16.835 --rc geninfo_all_blocks=1 00:04:16.835 --rc geninfo_unexecuted_blocks=1 00:04:16.835 00:04:16.835 ' 00:04:16.835 06:34:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:16.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.835 --rc genhtml_branch_coverage=1 00:04:16.835 --rc genhtml_function_coverage=1 00:04:16.835 --rc genhtml_legend=1 00:04:16.835 --rc geninfo_all_blocks=1 00:04:16.835 --rc geninfo_unexecuted_blocks=1 00:04:16.835 00:04:16.835 ' 00:04:16.835 06:34:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:16.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.835 --rc genhtml_branch_coverage=1 00:04:16.835 --rc genhtml_function_coverage=1 00:04:16.835 --rc genhtml_legend=1 00:04:16.835 --rc geninfo_all_blocks=1 00:04:16.835 --rc geninfo_unexecuted_blocks=1 00:04:16.835 00:04:16.835 ' 00:04:16.835 06:34:30 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:16.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.835 --rc genhtml_branch_coverage=1 00:04:16.835 --rc genhtml_function_coverage=1 00:04:16.835 --rc genhtml_legend=1 00:04:16.835 --rc geninfo_all_blocks=1 00:04:16.835 --rc geninfo_unexecuted_blocks=1 00:04:16.835 00:04:16.835 ' 00:04:16.835 06:34:30 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:16.835 06:34:30 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:16.835 06:34:30 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:16.835 06:34:30 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:16.835 06:34:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:16.835 06:34:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:16.836 06:34:30 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:16.836 06:34:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:16.836 06:34:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:16.836 06:34:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:16.836 06:34:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:16.836 06:34:30 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:16.836 06:34:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:16.836 06:34:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:16.836 06:34:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:16.836 06:34:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:16.836 06:34:30 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:16.836 06:34:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:16.836 06:34:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:16.836 06:34:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:16.836 06:34:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:16.836 06:34:30 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:16.836 06:34:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:16.836 06:34:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:16.836 06:34:30 -- setup/acl.sh@12 -- # devs=() 00:04:16.836 06:34:30 -- setup/acl.sh@12 -- # declare -a devs 00:04:16.836 06:34:30 -- setup/acl.sh@13 -- # drivers=() 00:04:16.836 06:34:30 -- setup/acl.sh@13 -- # declare -A drivers 00:04:16.836 06:34:30 -- setup/acl.sh@51 -- # setup reset 00:04:16.836 06:34:30 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.836 06:34:30 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:17.773 06:34:31 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:17.773 06:34:31 -- setup/acl.sh@16 -- # local dev driver 00:04:17.773 06:34:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:17.773 06:34:31 -- setup/acl.sh@15 -- # setup output status 00:04:17.773 06:34:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.773 06:34:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:17.773 Hugepages 00:04:17.773 node hugesize free / total 00:04:17.773 06:34:31 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:17.773 06:34:31 -- setup/acl.sh@19 -- # continue 00:04:17.773 06:34:31 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:04:17.773 00:04:17.773 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:17.773 06:34:31 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:17.773 06:34:31 -- setup/acl.sh@19 -- # continue 00:04:17.773 06:34:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:17.773 06:34:31 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:17.773 06:34:31 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:17.773 06:34:31 -- setup/acl.sh@20 -- # continue 00:04:17.773 06:34:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.034 06:34:31 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:18.034 06:34:31 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:18.034 06:34:31 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:18.034 06:34:31 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:18.034 06:34:31 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:18.034 06:34:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.034 06:34:31 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:18.034 06:34:31 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:18.034 06:34:31 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:18.034 06:34:31 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:18.034 06:34:31 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:18.034 06:34:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.034 06:34:31 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:18.034 06:34:31 -- setup/acl.sh@54 -- # run_test denied denied 00:04:18.034 06:34:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:18.034 06:34:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:18.034 06:34:31 -- common/autotest_common.sh@10 -- # set +x 00:04:18.034 ************************************ 00:04:18.034 START TEST denied 00:04:18.034 ************************************ 00:04:18.034 06:34:31 -- common/autotest_common.sh@1114 -- # denied 00:04:18.034 06:34:31 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:18.035 06:34:31 -- setup/acl.sh@38 -- # setup output config 00:04:18.035 06:34:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.035 06:34:31 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:18.035 06:34:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:18.970 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:18.970 06:34:32 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:18.970 06:34:32 -- setup/acl.sh@28 -- # local dev driver 00:04:18.970 06:34:32 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:18.970 06:34:32 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:18.970 06:34:32 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:18.970 06:34:32 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:18.970 06:34:32 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:18.970 06:34:32 -- setup/acl.sh@41 -- # setup reset 00:04:18.970 06:34:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:18.970 06:34:32 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:19.539 00:04:19.539 real 0m1.466s 00:04:19.539 user 0m0.624s 00:04:19.539 sys 0m0.813s 00:04:19.539 ************************************ 00:04:19.539 END TEST denied 00:04:19.539 ************************************ 00:04:19.539 06:34:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:19.539 06:34:33 -- 
common/autotest_common.sh@10 -- # set +x 00:04:19.539 06:34:33 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:19.539 06:34:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.539 06:34:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.539 06:34:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.539 ************************************ 00:04:19.539 START TEST allowed 00:04:19.539 ************************************ 00:04:19.539 06:34:33 -- common/autotest_common.sh@1114 -- # allowed 00:04:19.539 06:34:33 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:19.539 06:34:33 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:19.539 06:34:33 -- setup/acl.sh@45 -- # setup output config 00:04:19.539 06:34:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.539 06:34:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:20.474 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:20.474 06:34:34 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:04:20.474 06:34:34 -- setup/acl.sh@28 -- # local dev driver 00:04:20.474 06:34:34 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:20.474 06:34:34 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:20.474 06:34:34 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:20.474 06:34:34 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:20.474 06:34:34 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:20.474 06:34:34 -- setup/acl.sh@48 -- # setup reset 00:04:20.474 06:34:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:20.474 06:34:34 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.041 00:04:21.041 real 0m1.538s 00:04:21.041 user 0m0.677s 00:04:21.041 sys 0m0.861s 00:04:21.041 06:34:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:21.041 06:34:34 -- common/autotest_common.sh@10 -- # set +x 00:04:21.041 ************************************ 00:04:21.041 END TEST allowed 00:04:21.041 ************************************ 00:04:21.041 00:04:21.041 real 0m4.431s 00:04:21.041 user 0m1.995s 00:04:21.041 sys 0m2.437s 00:04:21.041 06:34:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:21.041 06:34:34 -- common/autotest_common.sh@10 -- # set +x 00:04:21.041 ************************************ 00:04:21.041 END TEST acl 00:04:21.041 ************************************ 00:04:21.041 06:34:35 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:21.041 06:34:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:21.041 06:34:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:21.041 06:34:35 -- common/autotest_common.sh@10 -- # set +x 00:04:21.299 ************************************ 00:04:21.299 START TEST hugepages 00:04:21.299 ************************************ 00:04:21.299 06:34:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:21.299 * Looking for test storage... 
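The denied and allowed tests above exercise scripts/setup.sh through the PCI_BLOCKED and PCI_ALLOWED environment variables, then read the controller's driver symlink in sysfs to confirm whether a device stayed on the kernel nvme driver or moved to uio_pci_generic. Restated as standalone commands on the BDFs from this run (a summary of what the log already shows; setup.sh normally needs root):

    # Leave 0000:00:06.0 on the kernel nvme driver and bind the rest for SPDK:
    PCI_BLOCKED="0000:00:06.0" /home/vagrant/spdk_repo/spdk/scripts/setup.sh config

    # Inspect where a controller ended up (same check as the acl.sh verify step):
    readlink -f /sys/bus/pci/devices/0000:00:06.0/driver

    # Or restrict setup.sh to a single allowed controller instead:
    PCI_ALLOWED="0000:00:06.0" /home/vagrant/spdk_repo/spdk/scripts/setup.sh config

    # Return all devices to their kernel drivers afterwards:
    /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset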
00:04:21.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:21.299 06:34:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:21.299 06:34:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:21.299 06:34:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:21.299 06:34:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:21.299 06:34:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:21.299 06:34:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:21.299 06:34:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:21.299 06:34:35 -- scripts/common.sh@335 -- # IFS=.-: 00:04:21.299 06:34:35 -- scripts/common.sh@335 -- # read -ra ver1 00:04:21.299 06:34:35 -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.299 06:34:35 -- scripts/common.sh@336 -- # read -ra ver2 00:04:21.299 06:34:35 -- scripts/common.sh@337 -- # local 'op=<' 00:04:21.299 06:34:35 -- scripts/common.sh@339 -- # ver1_l=2 00:04:21.299 06:34:35 -- scripts/common.sh@340 -- # ver2_l=1 00:04:21.299 06:34:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:21.299 06:34:35 -- scripts/common.sh@343 -- # case "$op" in 00:04:21.299 06:34:35 -- scripts/common.sh@344 -- # : 1 00:04:21.299 06:34:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:21.299 06:34:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:21.299 06:34:35 -- scripts/common.sh@364 -- # decimal 1 00:04:21.299 06:34:35 -- scripts/common.sh@352 -- # local d=1 00:04:21.299 06:34:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.299 06:34:35 -- scripts/common.sh@354 -- # echo 1 00:04:21.299 06:34:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:21.299 06:34:35 -- scripts/common.sh@365 -- # decimal 2 00:04:21.299 06:34:35 -- scripts/common.sh@352 -- # local d=2 00:04:21.299 06:34:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.299 06:34:35 -- scripts/common.sh@354 -- # echo 2 00:04:21.299 06:34:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:21.299 06:34:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:21.299 06:34:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:21.299 06:34:35 -- scripts/common.sh@367 -- # return 0 00:04:21.299 06:34:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.299 06:34:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:21.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.299 --rc genhtml_branch_coverage=1 00:04:21.299 --rc genhtml_function_coverage=1 00:04:21.299 --rc genhtml_legend=1 00:04:21.299 --rc geninfo_all_blocks=1 00:04:21.299 --rc geninfo_unexecuted_blocks=1 00:04:21.299 00:04:21.299 ' 00:04:21.299 06:34:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:21.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.299 --rc genhtml_branch_coverage=1 00:04:21.299 --rc genhtml_function_coverage=1 00:04:21.299 --rc genhtml_legend=1 00:04:21.299 --rc geninfo_all_blocks=1 00:04:21.299 --rc geninfo_unexecuted_blocks=1 00:04:21.299 00:04:21.299 ' 00:04:21.299 06:34:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:21.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.299 --rc genhtml_branch_coverage=1 00:04:21.299 --rc genhtml_function_coverage=1 00:04:21.299 --rc genhtml_legend=1 00:04:21.299 --rc geninfo_all_blocks=1 00:04:21.299 --rc geninfo_unexecuted_blocks=1 00:04:21.299 00:04:21.299 ' 00:04:21.299 06:34:35 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:21.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.299 --rc genhtml_branch_coverage=1 00:04:21.299 --rc genhtml_function_coverage=1 00:04:21.299 --rc genhtml_legend=1 00:04:21.299 --rc geninfo_all_blocks=1 00:04:21.299 --rc geninfo_unexecuted_blocks=1 00:04:21.299 00:04:21.299 ' 00:04:21.299 06:34:35 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:21.299 06:34:35 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:21.299 06:34:35 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:21.299 06:34:35 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:21.299 06:34:35 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:21.299 06:34:35 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:21.299 06:34:35 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:21.299 06:34:35 -- setup/common.sh@18 -- # local node= 00:04:21.299 06:34:35 -- setup/common.sh@19 -- # local var val 00:04:21.299 06:34:35 -- setup/common.sh@20 -- # local mem_f mem 00:04:21.299 06:34:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.299 06:34:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.299 06:34:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.299 06:34:35 -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.299 06:34:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.299 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.299 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.299 06:34:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 5829496 kB' 'MemAvailable: 7341828 kB' 'Buffers: 3704 kB' 'Cached: 1722028 kB' 'SwapCached: 0 kB' 'Active: 496896 kB' 'Inactive: 1346036 kB' 'Active(anon): 127708 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 118648 kB' 'Mapped: 50996 kB' 'Shmem: 10508 kB' 'KReclaimable: 68108 kB' 'Slab: 163752 kB' 'SReclaimable: 68108 kB' 'SUnreclaim: 95644 kB' 'KernelStack: 6544 kB' 'PageTables: 4664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 320008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:21.299 06:34:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.299 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.299 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.299 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.299 06:34:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.299 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- 
setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # continue 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.300 06:34:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.300 06:34:35 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.300 06:34:35 -- setup/common.sh@33 -- # echo 2048 00:04:21.300 06:34:35 -- setup/common.sh@33 -- # return 0 00:04:21.300 06:34:35 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:21.300 06:34:35 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:21.300 06:34:35 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:21.300 06:34:35 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:21.300 06:34:35 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:21.300 06:34:35 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:21.300 06:34:35 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:21.300 06:34:35 -- setup/hugepages.sh@207 -- # get_nodes 00:04:21.300 06:34:35 -- setup/hugepages.sh@27 -- # local node 00:04:21.300 06:34:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.300 06:34:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:21.300 06:34:35 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:21.300 06:34:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.300 06:34:35 -- setup/hugepages.sh@208 -- # clear_hp 00:04:21.300 06:34:35 -- setup/hugepages.sh@37 -- # local node hp 00:04:21.300 06:34:35 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:21.300 06:34:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.300 06:34:35 -- setup/hugepages.sh@41 -- # echo 0 00:04:21.300 06:34:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.300 06:34:35 -- setup/hugepages.sh@41 -- # echo 0 00:04:21.300 06:34:35 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:21.300 06:34:35 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:21.300 06:34:35 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:21.300 06:34:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:21.300 06:34:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:21.300 06:34:35 -- common/autotest_common.sh@10 -- # set +x 00:04:21.300 ************************************ 00:04:21.300 START TEST default_setup 00:04:21.300 ************************************ 00:04:21.300 06:34:35 -- common/autotest_common.sh@1114 -- # default_setup 00:04:21.300 06:34:35 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:21.300 06:34:35 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:21.300 06:34:35 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:21.300 06:34:35 -- setup/hugepages.sh@51 -- # shift 00:04:21.300 06:34:35 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:21.300 06:34:35 -- setup/hugepages.sh@52 -- # local node_ids 00:04:21.300 06:34:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:21.300 06:34:35 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:21.300 06:34:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:21.300 06:34:35 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:21.300 06:34:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:21.300 06:34:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:21.300 06:34:35 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:21.300 06:34:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:21.300 06:34:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:21.300 06:34:35 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:21.301 06:34:35 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:21.301 06:34:35 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:21.301 06:34:35 -- setup/hugepages.sh@73 -- # return 0 00:04:21.301 06:34:35 -- setup/hugepages.sh@137 -- # setup output 00:04:21.301 06:34:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.301 06:34:35 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:22.238 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.238 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.238 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.238 06:34:36 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:22.238 06:34:36 -- setup/hugepages.sh@89 -- # local node 00:04:22.238 06:34:36 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.238 06:34:36 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.238 06:34:36 -- setup/hugepages.sh@92 -- # local surp 00:04:22.238 06:34:36 -- setup/hugepages.sh@93 -- # local resv 00:04:22.238 06:34:36 -- setup/hugepages.sh@94 -- # local anon 00:04:22.238 06:34:36 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.238 06:34:36 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.238 06:34:36 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.238 06:34:36 -- setup/common.sh@18 -- # local node= 00:04:22.238 06:34:36 -- setup/common.sh@19 -- # local var val 00:04:22.238 06:34:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.238 06:34:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.238 06:34:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.238 06:34:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.238 06:34:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.238 06:34:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.238 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.238 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.238 06:34:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7934844 kB' 'MemAvailable: 9447000 kB' 'Buffers: 3704 kB' 'Cached: 1722016 kB' 'SwapCached: 0 kB' 'Active: 498348 kB' 'Inactive: 1346044 kB' 'Active(anon): 129160 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 120232 kB' 'Mapped: 50912 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163520 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95776 kB' 'KernelStack: 6496 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:22.238 06:34:36 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.238 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.238 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.238 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.238 06:34:36 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.238 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.238 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.238 06:34:36 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:22.238 06:34:36 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.238 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.238 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.238 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.238 06:34:36 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.238 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.238 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.238 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.238 06:34:36 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.238 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.238 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.238 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.238 06:34:36 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.238 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.238 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.238 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- 
setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.239 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.239 06:34:36 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.239 06:34:36 -- setup/common.sh@33 -- # echo 0 00:04:22.239 06:34:36 -- setup/common.sh@33 -- # return 0 00:04:22.239 06:34:36 -- setup/hugepages.sh@97 -- # anon=0 00:04:22.239 06:34:36 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:22.239 06:34:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.239 06:34:36 -- setup/common.sh@18 -- # local node= 00:04:22.239 06:34:36 -- setup/common.sh@19 -- # local var val 00:04:22.239 06:34:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.239 06:34:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.239 06:34:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.239 06:34:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.239 06:34:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.239 06:34:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.240 06:34:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7934592 kB' 'MemAvailable: 9446748 kB' 'Buffers: 3704 kB' 'Cached: 1722016 kB' 'SwapCached: 0 kB' 'Active: 498060 kB' 'Inactive: 1346044 kB' 'Active(anon): 128872 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 119968 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163520 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95776 kB' 'KernelStack: 6496 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 
00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- 
setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.240 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.240 06:34:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 
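Once these field scans finish, the trace below records the numbers the default_setup test actually checks: nr_hugepages=1024 with resv_hugepages=0 and surplus_hugepages=0, followed by the assertion (( 1024 == nr_hugepages + surp + resv )). That is consistent with the earlier request of 2097152 kB on node 0 at the 2048 kB hugepage size reported in /proc/meminfo. A back-of-the-envelope sketch of that arithmetic, with values taken from this log and illustrative variable names (not the script's own code):

    # Accounting implied by this run: requested size divided by hugepage size.
    size_kb=2097152          # memory requested via get_test_nr_hugepages
    hugepagesize_kb=2048     # Hugepagesize from the /proc/meminfo dumps above
    nr=$(( size_kb / hugepagesize_kb ))
    echo "$nr"               # 1024, matching HugePages_Total in the dumps
    (( 1024 == nr + 0 + 0 )) && echo OK    # surplus and reserved are both 0 here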
00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.241 06:34:36 -- setup/common.sh@33 -- # echo 0 00:04:22.241 06:34:36 -- setup/common.sh@33 -- # return 0 00:04:22.241 06:34:36 -- setup/hugepages.sh@99 -- # surp=0 00:04:22.241 06:34:36 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:22.241 06:34:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:22.241 06:34:36 -- setup/common.sh@18 -- # local node= 00:04:22.241 06:34:36 -- setup/common.sh@19 -- # local var val 00:04:22.241 06:34:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.241 06:34:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.241 06:34:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.241 06:34:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.241 06:34:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.241 06:34:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.241 
06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7934592 kB' 'MemAvailable: 9446752 kB' 'Buffers: 3704 kB' 'Cached: 1722020 kB' 'SwapCached: 0 kB' 'Active: 498032 kB' 'Inactive: 1346048 kB' 'Active(anon): 128844 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 119944 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163520 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95776 kB' 'KernelStack: 6528 kB' 'PageTables: 4624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 
06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.241 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.241 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 
06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.242 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.242 06:34:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.503 06:34:36 -- setup/common.sh@33 -- # echo 0 00:04:22.503 06:34:36 -- setup/common.sh@33 -- # return 0 00:04:22.503 06:34:36 -- setup/hugepages.sh@100 -- # resv=0 00:04:22.503 nr_hugepages=1024 00:04:22.503 06:34:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:22.503 resv_hugepages=0 00:04:22.503 06:34:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:22.503 surplus_hugepages=0 00:04:22.503 06:34:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:22.503 anon_hugepages=0 00:04:22.503 06:34:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:22.503 06:34:36 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.503 06:34:36 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:22.503 06:34:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:22.503 06:34:36 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:22.503 06:34:36 -- setup/common.sh@18 -- # local node= 00:04:22.503 06:34:36 -- setup/common.sh@19 -- # local var val 00:04:22.503 06:34:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.503 06:34:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.503 06:34:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.503 06:34:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.503 06:34:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.503 06:34:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.503 06:34:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7934592 kB' 'MemAvailable: 9446752 kB' 'Buffers: 3704 kB' 'Cached: 1722020 kB' 'SwapCached: 0 kB' 'Active: 497956 kB' 'Inactive: 1346048 kB' 'Active(anon): 128768 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 119932 kB' 'Mapped: 50788 kB' 
'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163504 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95760 kB' 'KernelStack: 6480 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.503 
06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.503 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.503 06:34:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- 
setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- 
setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.504 06:34:36 -- setup/common.sh@33 -- # echo 1024 00:04:22.504 06:34:36 -- setup/common.sh@33 -- # return 0 00:04:22.504 06:34:36 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.504 06:34:36 -- setup/hugepages.sh@112 -- # get_nodes 00:04:22.504 06:34:36 -- setup/hugepages.sh@27 -- # local node 00:04:22.504 06:34:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.504 06:34:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:22.504 06:34:36 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:22.504 06:34:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:22.504 06:34:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.504 06:34:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.504 06:34:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:22.504 06:34:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.504 06:34:36 -- setup/common.sh@18 -- # local node=0 00:04:22.504 06:34:36 -- setup/common.sh@19 -- # local var val 00:04:22.504 06:34:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.504 06:34:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.504 06:34:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:22.504 06:34:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:22.504 06:34:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.504 06:34:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7934592 kB' 'MemUsed: 4304520 kB' 'SwapCached: 0 kB' 'Active: 497996 kB' 'Inactive: 1346048 kB' 'Active(anon): 128808 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'FilePages: 1725724 kB' 'Mapped: 50788 kB' 'AnonPages: 119980 kB' 'Shmem: 10484 kB' 'KernelStack: 6496 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67744 kB' 'Slab: 163504 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.504 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.504 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 
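The xtrace above is the key-matching pass of the test's meminfo helper: it reads a meminfo-style file with IFS=': ', skips every key that is not the one requested, and echoes the value once it reaches it (1024 for HugePages_Total here, 0 for the node-0 HugePages_Surp read that follows). A minimal stand-alone sketch of that pattern is below; the function name get_meminfo_value is illustrative rather than the actual setup/common.sh helper, and the per-node handling is simplified.

#!/usr/bin/env bash
# Minimal sketch of the lookup pattern exercised in the trace above:
# scan a meminfo-style file and print the value for one key.
get_meminfo_value() {
    local key=$1 node=${2-}
    local file=/proc/meminfo
    # Prefer the per-node view when a node is given and it exists,
    # mirroring the node0/meminfo switch visible in the trace.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && file=/sys/devices/system/node/node$node/meminfo
    local var val _
    # Per-node files prefix each line with "Node <n> "; strip that, then
    # split "Key:   value kB" on ':' and whitespace, as the trace does.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$file")
    return 1
}

# Example calls (values depend on the machine):
#   get_meminfo_value HugePages_Total      # -> 1024 in this run
#   get_meminfo_value HugePages_Surp 0     # -> 0 for node 0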
06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.505 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.505 06:34:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.505 06:34:36 -- setup/common.sh@33 -- # echo 0 00:04:22.505 06:34:36 -- setup/common.sh@33 -- # return 0 00:04:22.505 06:34:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.505 06:34:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.505 06:34:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.505 06:34:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.505 node0=1024 expecting 1024 00:04:22.505 06:34:36 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:22.505 06:34:36 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:22.505 00:04:22.505 real 0m1.005s 00:04:22.505 user 0m0.451s 00:04:22.505 sys 0m0.514s 00:04:22.505 06:34:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:22.505 06:34:36 -- common/autotest_common.sh@10 -- # set +x 00:04:22.505 ************************************ 00:04:22.505 END TEST default_setup 00:04:22.505 ************************************ 00:04:22.505 06:34:36 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:22.505 06:34:36 
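Before printing its "node0=1024 expecting 1024" line and the END banner above, the default_setup test cross-checked the global hugepage counter against the per-node view. The sketch below is a hedged reconstruction of that accounting, reusing the illustrative get_meminfo_value helper from the earlier sketch; the real test accumulates surplus and reserved pages per node, which collapses to a direct comparison on this single-node VM.

verify_hugepages() {
    # Global view: the requested pool plus surplus/reserved pages should
    # equal HugePages_Total, matching the
    # (( 1024 == nr_hugepages + surp + resv )) check in the trace.
    local expected=$1
    local total surp resv
    total=$(get_meminfo_value HugePages_Total)
    surp=$(get_meminfo_value HugePages_Surp)
    resv=$(get_meminfo_value HugePages_Rsvd)
    (( total == expected + surp + resv )) || return 1

    # Per-node view: with only one node, node 0 is expected to hold the
    # full pool, hence "node0=1024 expecting 1024" in the log.
    local node_dir node node_total
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        node_total=$(get_meminfo_value HugePages_Total "$node")
        echo "node${node}=${node_total} expecting ${expected}"
        (( node_total == expected )) || return 1
    done
}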
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:22.505 06:34:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:22.505 06:34:36 -- common/autotest_common.sh@10 -- # set +x 00:04:22.505 ************************************ 00:04:22.505 START TEST per_node_1G_alloc 00:04:22.505 ************************************ 00:04:22.505 06:34:36 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:04:22.505 06:34:36 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:22.505 06:34:36 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:22.505 06:34:36 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:22.505 06:34:36 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:22.505 06:34:36 -- setup/hugepages.sh@51 -- # shift 00:04:22.505 06:34:36 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:22.505 06:34:36 -- setup/hugepages.sh@52 -- # local node_ids 00:04:22.505 06:34:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:22.506 06:34:36 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:22.506 06:34:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:22.506 06:34:36 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:22.506 06:34:36 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:22.506 06:34:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:22.506 06:34:36 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:22.506 06:34:36 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:22.506 06:34:36 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:22.506 06:34:36 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:22.506 06:34:36 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:22.506 06:34:36 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:22.506 06:34:36 -- setup/hugepages.sh@73 -- # return 0 00:04:22.506 06:34:36 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:22.506 06:34:36 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:22.506 06:34:36 -- setup/hugepages.sh@146 -- # setup output 00:04:22.506 06:34:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.506 06:34:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:22.765 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.765 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:22.765 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:22.765 06:34:36 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:22.765 06:34:36 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:22.765 06:34:36 -- setup/hugepages.sh@89 -- # local node 00:04:22.765 06:34:36 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.765 06:34:36 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.765 06:34:36 -- setup/hugepages.sh@92 -- # local surp 00:04:22.765 06:34:36 -- setup/hugepages.sh@93 -- # local resv 00:04:22.765 06:34:36 -- setup/hugepages.sh@94 -- # local anon 00:04:22.765 06:34:36 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.765 06:34:36 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.765 06:34:36 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.765 06:34:36 -- setup/common.sh@18 -- # local node= 00:04:22.765 06:34:36 -- setup/common.sh@19 -- # local var val 00:04:22.765 06:34:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.765 06:34:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.765 06:34:36 -- 
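The per_node_1G_alloc run above requests 1048576 kB (1 GiB) on node 0, which the helper converts into 512 two-megabyte pages and passes to scripts/setup.sh through the NRHUGE and HUGENODE variables it sets before "setup output". A short sketch of that sizing follows, with the page size hard-coded to the 2048 kB value this runner reports; run it as root in a real environment.

# Sizing sketch for the per_node_1G_alloc request (numbers from the trace).
size_kb=1048576                             # 1 GiB requested by the test
hugepage_kb=2048                            # Hugepagesize on this runner
nr_hugepages=$(( size_kb / hugepage_kb ))   # -> 512 pages

# Pin the allocation to NUMA node 0, as the trace does (setup.sh path is
# the one used in this workspace).
sudo NRHUGE=$nr_hugepages HUGENODE=0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh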
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.765 06:34:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.765 06:34:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.765 06:34:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.765 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.765 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.765 06:34:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8981432 kB' 'MemAvailable: 10493596 kB' 'Buffers: 3704 kB' 'Cached: 1722020 kB' 'SwapCached: 0 kB' 'Active: 498860 kB' 'Inactive: 1346052 kB' 'Active(anon): 129672 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 120496 kB' 'Mapped: 51016 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163512 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95768 kB' 'KernelStack: 6456 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:22.765 06:34:36 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.765 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.765 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.765 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.765 06:34:36 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.765 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.765 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.765 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.765 06:34:36 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.765 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.765 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.765 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.765 06:34:36 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.765 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.765 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.765 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.765 06:34:36 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.765 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.765 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.765 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.765 06:34:36 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.765 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.765 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.765 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.765 06:34:36 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.765 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.765 06:34:36 
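As a quick sanity check on the dump just printed: the Hugetlb field should equal the page count times the page size, which it does for the 512 pages allocated here.

# Arithmetic check on the meminfo dump above (numbers copied from the trace).
pages=512        # HugePages_Total after the per-node allocation
page_kb=2048     # Hugepagesize
echo $(( pages * page_kb ))   # 1048576 kB, matching the 'Hugetlb:' line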
-- setup/common.sh@31 -- # IFS=': ' 00:04:22.765 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.765 06:34:36 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.765 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.765 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.765 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.765 06:34:36 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.765 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.765 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.765 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.765 06:34:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.765 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.765 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.765 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.781 06:34:36 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.781 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.781 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.781 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.781 06:34:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.781 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.781 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.781 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.781 06:34:36 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.781 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.781 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.781 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.781 06:34:36 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.781 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.781 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.781 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.781 06:34:36 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.781 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.781 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.781 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.781 06:34:36 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.781 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.781 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.781 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.781 06:34:36 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.781 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.781 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.781 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.781 06:34:36 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.781 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.781 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.781 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.781 06:34:36 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.781 06:34:36 -- setup/common.sh@32 -- # continue 00:04:22.781 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.781 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.781 06:34:36 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.781 
06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.043 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.043 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.043 06:34:36 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.044 06:34:36 -- setup/common.sh@33 -- # echo 0 00:04:23.044 06:34:36 -- setup/common.sh@33 -- # return 0 00:04:23.044 06:34:36 -- setup/hugepages.sh@97 -- # anon=0 00:04:23.044 06:34:36 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:23.044 06:34:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.044 06:34:36 -- setup/common.sh@18 -- # local node= 00:04:23.044 06:34:36 -- setup/common.sh@19 -- # local var val 00:04:23.044 06:34:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.044 06:34:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.044 06:34:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.044 06:34:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.044 06:34:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.044 06:34:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8981684 kB' 'MemAvailable: 10493848 kB' 'Buffers: 3704 kB' 'Cached: 1722020 kB' 'SwapCached: 0 kB' 'Active: 498096 kB' 'Inactive: 1346052 kB' 
'Active(anon): 128908 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 120004 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163528 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95784 kB' 'KernelStack: 6496 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # 
continue 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.044 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.044 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.045 06:34:36 -- setup/common.sh@33 -- # echo 0 00:04:23.045 06:34:36 -- setup/common.sh@33 -- # return 0 00:04:23.045 06:34:36 -- setup/hugepages.sh@99 -- # surp=0 00:04:23.045 06:34:36 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:23.045 06:34:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:23.045 06:34:36 -- setup/common.sh@18 -- # local node= 00:04:23.045 06:34:36 -- setup/common.sh@19 -- # local var val 00:04:23.045 06:34:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.045 06:34:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.045 06:34:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.045 06:34:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.045 06:34:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.045 06:34:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8981684 kB' 'MemAvailable: 10493848 kB' 'Buffers: 3704 kB' 'Cached: 1722020 kB' 'SwapCached: 0 kB' 'Active: 498212 kB' 'Inactive: 1346052 kB' 'Active(anon): 129024 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 120108 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163504 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95760 kB' 'KernelStack: 6480 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.045 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.045 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.046 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.046 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.047 06:34:36 -- setup/common.sh@33 -- # echo 0 00:04:23.047 06:34:36 -- setup/common.sh@33 -- # return 0 00:04:23.047 06:34:36 -- setup/hugepages.sh@100 -- # resv=0 00:04:23.047 nr_hugepages=512 00:04:23.047 resv_hugepages=0 00:04:23.047 surplus_hugepages=0 00:04:23.047 anon_hugepages=0 00:04:23.047 06:34:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:23.047 06:34:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:23.047 06:34:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:23.047 06:34:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:23.047 06:34:36 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:23.047 06:34:36 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:23.047 06:34:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:23.047 06:34:36 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:23.047 06:34:36 -- setup/common.sh@18 -- # local node= 00:04:23.047 06:34:36 -- setup/common.sh@19 -- # local var val 00:04:23.047 06:34:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.047 06:34:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.047 06:34:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.047 06:34:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.047 06:34:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.047 06:34:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8981684 kB' 'MemAvailable: 10493848 kB' 'Buffers: 3704 kB' 'Cached: 1722020 kB' 'SwapCached: 0 kB' 'Active: 497976 kB' 'Inactive: 1346052 kB' 'Active(anon): 128788 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 119872 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163496 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95752 kB' 'KernelStack: 6480 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 
06:34:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 
06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.047 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.047 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.048 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.048 06:34:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.048 06:34:36 -- setup/common.sh@33 -- # echo 512 00:04:23.048 06:34:36 -- setup/common.sh@33 -- # return 0 00:04:23.048 06:34:36 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:23.048 06:34:36 -- setup/hugepages.sh@112 -- # get_nodes 00:04:23.048 06:34:36 -- setup/hugepages.sh@27 -- # local node 00:04:23.048 06:34:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.048 06:34:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:23.048 06:34:36 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:23.048 06:34:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.048 06:34:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.048 06:34:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.048 06:34:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:23.048 06:34:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.048 06:34:36 -- setup/common.sh@18 -- # local node=0 00:04:23.048 06:34:36 -- setup/common.sh@19 -- # local 
var val 00:04:23.048 06:34:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.049 06:34:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.049 06:34:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.049 06:34:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.049 06:34:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.049 06:34:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8982124 kB' 'MemUsed: 3256988 kB' 'SwapCached: 0 kB' 'Active: 498324 kB' 'Inactive: 1346052 kB' 'Active(anon): 129136 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'FilePages: 1725724 kB' 'Mapped: 50804 kB' 'AnonPages: 120428 kB' 'Shmem: 10484 kB' 'KernelStack: 6512 kB' 'PageTables: 4576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67744 kB' 'Slab: 163508 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- 
setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # continue 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.049 06:34:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.049 06:34:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.049 06:34:36 -- setup/common.sh@33 -- # echo 0 00:04:23.049 06:34:36 -- setup/common.sh@33 -- # return 0 00:04:23.049 06:34:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.049 06:34:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.050 06:34:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.050 06:34:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.050 node0=512 expecting 512 00:04:23.050 ************************************ 00:04:23.050 END TEST per_node_1G_alloc 00:04:23.050 ************************************ 00:04:23.050 06:34:36 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:23.050 06:34:36 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:23.050 00:04:23.050 real 0m0.582s 00:04:23.050 user 0m0.287s 00:04:23.050 sys 0m0.302s 00:04:23.050 06:34:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:23.050 06:34:36 -- common/autotest_common.sh@10 -- # set +x 00:04:23.050 06:34:36 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:23.050 06:34:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:23.050 06:34:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.050 06:34:36 -- common/autotest_common.sh@10 -- # set +x 00:04:23.050 ************************************ 00:04:23.050 START TEST even_2G_alloc 00:04:23.050 ************************************ 00:04:23.050 06:34:36 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:04:23.050 06:34:36 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:23.050 06:34:36 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:23.050 06:34:36 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:23.050 06:34:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:23.050 06:34:36 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:23.050 06:34:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:23.050 06:34:36 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:23.050 06:34:36 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:23.050 06:34:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:23.050 06:34:36 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:23.050 06:34:36 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:23.050 06:34:36 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:23.050 06:34:36 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:23.050 06:34:36 -- 
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:23.050 06:34:36 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:23.050 06:34:36 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:23.050 06:34:36 -- setup/hugepages.sh@83 -- # : 0 00:04:23.050 06:34:36 -- setup/hugepages.sh@84 -- # : 0 00:04:23.050 06:34:36 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:23.050 06:34:36 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:23.050 06:34:36 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:23.050 06:34:36 -- setup/hugepages.sh@153 -- # setup output 00:04:23.050 06:34:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.050 06:34:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:23.621 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.621 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:23.621 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:23.621 06:34:37 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:23.621 06:34:37 -- setup/hugepages.sh@89 -- # local node 00:04:23.621 06:34:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:23.621 06:34:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:23.621 06:34:37 -- setup/hugepages.sh@92 -- # local surp 00:04:23.621 06:34:37 -- setup/hugepages.sh@93 -- # local resv 00:04:23.621 06:34:37 -- setup/hugepages.sh@94 -- # local anon 00:04:23.621 06:34:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:23.621 06:34:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:23.621 06:34:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:23.621 06:34:37 -- setup/common.sh@18 -- # local node= 00:04:23.621 06:34:37 -- setup/common.sh@19 -- # local var val 00:04:23.621 06:34:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.621 06:34:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.622 06:34:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.622 06:34:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.622 06:34:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.622 06:34:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7935712 kB' 'MemAvailable: 9447876 kB' 'Buffers: 3704 kB' 'Cached: 1722020 kB' 'SwapCached: 0 kB' 'Active: 497888 kB' 'Inactive: 1346052 kB' 'Active(anon): 128700 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 120012 kB' 'Mapped: 50916 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163528 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95784 kB' 'KernelStack: 6472 kB' 'PageTables: 4564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 
06:34:37 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.622 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.622 06:34:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:23.622 06:34:37 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # 
continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.623 06:34:37 -- setup/common.sh@33 -- # echo 0 00:04:23.623 06:34:37 -- setup/common.sh@33 -- # return 0 00:04:23.623 06:34:37 -- setup/hugepages.sh@97 -- # anon=0 00:04:23.623 06:34:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:23.623 06:34:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.623 06:34:37 -- setup/common.sh@18 -- # local node= 00:04:23.623 06:34:37 -- setup/common.sh@19 -- # local var val 00:04:23.623 06:34:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.623 06:34:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.623 06:34:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.623 06:34:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.623 06:34:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.623 06:34:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7937564 kB' 'MemAvailable: 9449728 kB' 'Buffers: 3704 kB' 'Cached: 1722020 kB' 'SwapCached: 0 kB' 'Active: 498052 kB' 'Inactive: 1346052 kB' 'Active(anon): 128864 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 120008 kB' 'Mapped: 50916 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163520 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95776 kB' 'KernelStack: 6456 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 
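The xtrace records around this point all come from the same helper pattern in setup/common.sh: get_meminfo picks /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node index is given), walks it field by field with IFS=': ' read -r var val _, and echoes the value once the requested key (HugePages_Total, HugePages_Surp, AnonHugePages, ...) matches; every "[[ <field> == ... ]] / continue" pair in the trace is one iteration of that scan. The sketch below is illustrative only, not the exact SPDK setup/common.sh source; the function name and structure are approximations of the behaviour visible in the trace.

shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

# Sketch: look up one field from /proc/meminfo or a per-NUMA-node meminfo file.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }              # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"   # e.g. var=HugePages_Total val=512
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

# Example use, mirroring the check recorded earlier in this log
# ("node0=512 expecting 512"):
#   total=$(get_meminfo_sketch HugePages_Total 0)
#   [[ $total == 512 ]] && echo "node0=$total expecting 512"

The hugepage counts themselves follow from the requested size: even_2G_alloc asks for 2097152 kB and, with Hugepagesize at 2048 kB, that works out to the 1024 pages reported as HugePages_Total in the meminfo dumps here, just as the earlier per_node_1G_alloc run used 512 pages (1048576 kB of Hugetlb).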
00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.623 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.623 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # 
continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.624 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.624 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.625 06:34:37 -- setup/common.sh@33 -- # echo 0 00:04:23.625 06:34:37 -- setup/common.sh@33 -- # return 0 00:04:23.625 06:34:37 -- setup/hugepages.sh@99 -- # surp=0 00:04:23.625 06:34:37 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:23.625 06:34:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:23.625 06:34:37 -- setup/common.sh@18 -- # local node= 00:04:23.625 06:34:37 -- setup/common.sh@19 -- # local var val 00:04:23.625 06:34:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.625 06:34:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.625 06:34:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.625 06:34:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.625 06:34:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.625 06:34:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7937564 kB' 'MemAvailable: 9449728 kB' 'Buffers: 3704 kB' 'Cached: 1722020 kB' 'SwapCached: 0 kB' 'Active: 498032 kB' 'Inactive: 1346052 kB' 'Active(anon): 128844 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 119960 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163536 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95792 kB' 'KernelStack: 6464 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 
00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.625 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.625 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- 
setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 
00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.626 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.626 06:34:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.626 06:34:37 -- setup/common.sh@33 -- # echo 0 00:04:23.626 06:34:37 -- setup/common.sh@33 -- # return 0 00:04:23.626 06:34:37 -- setup/hugepages.sh@100 -- # resv=0 00:04:23.626 06:34:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:23.626 nr_hugepages=1024 00:04:23.626 06:34:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:23.626 resv_hugepages=0 00:04:23.626 06:34:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:23.626 surplus_hugepages=0 00:04:23.626 06:34:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:23.626 anon_hugepages=0 00:04:23.627 06:34:37 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.627 06:34:37 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:23.627 06:34:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:23.627 06:34:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:23.627 06:34:37 -- setup/common.sh@18 -- # local node= 00:04:23.627 06:34:37 -- setup/common.sh@19 -- # local var val 00:04:23.627 06:34:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.627 06:34:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.627 06:34:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.627 06:34:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.627 06:34:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.627 06:34:37 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7937564 kB' 'MemAvailable: 9449728 kB' 'Buffers: 3704 kB' 'Cached: 1722020 kB' 'SwapCached: 0 kB' 'Active: 498236 kB' 'Inactive: 1346052 kB' 'Active(anon): 129048 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 119936 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163532 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95788 kB' 'KernelStack: 6464 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 
06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.627 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.627 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 
00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 
00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.628 06:34:37 -- setup/common.sh@33 -- # echo 1024 00:04:23.628 06:34:37 -- setup/common.sh@33 -- # return 0 00:04:23.628 06:34:37 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.628 06:34:37 -- setup/hugepages.sh@112 -- # get_nodes 00:04:23.628 06:34:37 -- setup/hugepages.sh@27 -- # local node 00:04:23.628 06:34:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.628 06:34:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:23.628 06:34:37 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:23.628 06:34:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.628 06:34:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.628 06:34:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.628 06:34:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:23.628 06:34:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.628 06:34:37 -- setup/common.sh@18 -- # local node=0 00:04:23.628 06:34:37 -- setup/common.sh@19 -- # local var val 00:04:23.628 06:34:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.628 06:34:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.628 06:34:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.628 06:34:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.628 06:34:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.628 06:34:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7937816 kB' 'MemUsed: 4301296 kB' 'SwapCached: 0 kB' 'Active: 497992 kB' 'Inactive: 1346052 kB' 'Active(anon): 128804 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'FilePages: 1725724 kB' 'Mapped: 50788 kB' 'AnonPages: 119896 kB' 'Shmem: 10484 kB' 'KernelStack: 6480 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67744 kB' 'Slab: 163532 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.628 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.628 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 
00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- 
setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # continue 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.629 06:34:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.629 06:34:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.629 06:34:37 -- setup/common.sh@33 -- # echo 0 00:04:23.629 06:34:37 -- setup/common.sh@33 -- # return 0 00:04:23.629 06:34:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.629 06:34:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.629 06:34:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.629 06:34:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.629 
06:34:37 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:23.629 node0=1024 expecting 1024 00:04:23.629 06:34:37 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:23.629 00:04:23.629 real 0m0.580s 00:04:23.629 user 0m0.275s 00:04:23.629 sys 0m0.304s 00:04:23.629 06:34:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:23.629 06:34:37 -- common/autotest_common.sh@10 -- # set +x 00:04:23.629 ************************************ 00:04:23.629 END TEST even_2G_alloc 00:04:23.629 ************************************ 00:04:23.629 06:34:37 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:23.629 06:34:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:23.629 06:34:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.629 06:34:37 -- common/autotest_common.sh@10 -- # set +x 00:04:23.888 ************************************ 00:04:23.888 START TEST odd_alloc 00:04:23.888 ************************************ 00:04:23.888 06:34:37 -- common/autotest_common.sh@1114 -- # odd_alloc 00:04:23.888 06:34:37 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:23.888 06:34:37 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:23.888 06:34:37 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:23.888 06:34:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:23.888 06:34:37 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:23.888 06:34:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:23.888 06:34:37 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:23.888 06:34:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:23.888 06:34:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:23.888 06:34:37 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:23.888 06:34:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:23.888 06:34:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:23.888 06:34:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:23.888 06:34:37 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:23.888 06:34:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:23.888 06:34:37 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:23.888 06:34:37 -- setup/hugepages.sh@83 -- # : 0 00:04:23.888 06:34:37 -- setup/hugepages.sh@84 -- # : 0 00:04:23.888 06:34:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:23.888 06:34:37 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:23.888 06:34:37 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:23.888 06:34:37 -- setup/hugepages.sh@160 -- # setup output 00:04:23.888 06:34:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.888 06:34:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:24.149 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.149 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.149 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.149 06:34:38 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:24.149 06:34:38 -- setup/hugepages.sh@89 -- # local node 00:04:24.149 06:34:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.149 06:34:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.149 06:34:38 -- setup/hugepages.sh@92 -- # local surp 00:04:24.149 06:34:38 -- setup/hugepages.sh@93 -- # local resv 00:04:24.149 06:34:38 -- setup/hugepages.sh@94 -- # local anon 00:04:24.149 06:34:38 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.149 06:34:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.149 06:34:38 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.149 06:34:38 -- setup/common.sh@18 -- # local node= 00:04:24.149 06:34:38 -- setup/common.sh@19 -- # local var val 00:04:24.149 06:34:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.149 06:34:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.149 06:34:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.149 06:34:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.149 06:34:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.149 06:34:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.149 06:34:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7934496 kB' 'MemAvailable: 9446660 kB' 'Buffers: 3704 kB' 'Cached: 1722020 kB' 'SwapCached: 0 kB' 'Active: 498340 kB' 'Inactive: 1346052 kB' 'Active(anon): 129152 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 120204 kB' 'Mapped: 50900 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163512 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95768 kB' 'KernelStack: 6456 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 321248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 
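The long runs of "-- # continue" in this part of the trace are the xtrace of the get_meminfo helper from setup/common.sh: it snapshots /proc/meminfo with mapfile, strips any "Node N " prefix, then walks every line with IFS=': ' and echoes the value once the key matches the requested field (AnonHugePages at this point). A minimal sketch of that lookup, reconstructed from the trace rather than copied from the repository, so details may differ from the real helper:

    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        # per-node lookups read the node-local meminfo instead (node= is empty here)
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop "Node N " prefixes from per-node files
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # every key that is not the requested one shows up as a "continue" in the xtrace
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo AnonHugePages   # prints 0 on this VM, hence anon=0 further down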
00:04:24.149 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # 
continue 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.149 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.149 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.150 06:34:38 -- setup/common.sh@33 -- # echo 0 00:04:24.150 06:34:38 -- setup/common.sh@33 -- # return 0 00:04:24.150 06:34:38 -- setup/hugepages.sh@97 -- # anon=0 00:04:24.150 06:34:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.150 06:34:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.150 06:34:38 -- setup/common.sh@18 -- # local node= 00:04:24.150 06:34:38 -- setup/common.sh@19 -- # local var val 00:04:24.150 06:34:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.150 06:34:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.150 06:34:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.150 06:34:38 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.150 06:34:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.150 06:34:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7935132 kB' 'MemAvailable: 9447296 kB' 'Buffers: 3704 kB' 'Cached: 1722020 kB' 'SwapCached: 0 kB' 'Active: 498056 kB' 'Inactive: 1346052 kB' 'Active(anon): 128868 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 120012 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163484 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95740 kB' 'KernelStack: 6496 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 321248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 
-- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.150 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.150 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 
00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 
00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.151 06:34:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.151 06:34:38 -- setup/common.sh@33 -- # echo 0 00:04:24.151 06:34:38 -- setup/common.sh@33 -- # return 0 00:04:24.151 06:34:38 -- setup/hugepages.sh@99 -- # surp=0 00:04:24.151 06:34:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.151 06:34:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.151 06:34:38 -- setup/common.sh@18 -- # local node= 00:04:24.151 06:34:38 -- setup/common.sh@19 -- # local var val 00:04:24.151 06:34:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.151 06:34:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.151 06:34:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.151 06:34:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.151 06:34:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.151 06:34:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.151 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7935132 kB' 'MemAvailable: 9447296 kB' 'Buffers: 3704 kB' 'Cached: 1722020 kB' 'SwapCached: 0 kB' 'Active: 498068 kB' 'Inactive: 1346052 kB' 'Active(anon): 128880 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 119968 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163484 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95740 kB' 'KernelStack: 6480 kB' 
'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 321248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.152 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.152 06:34:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.153 06:34:38 -- setup/common.sh@33 -- # echo 0 00:04:24.153 06:34:38 -- setup/common.sh@33 -- # return 0 00:04:24.153 nr_hugepages=1025 00:04:24.153 resv_hugepages=0 00:04:24.153 surplus_hugepages=0 00:04:24.153 06:34:38 -- setup/hugepages.sh@100 -- # resv=0 00:04:24.153 06:34:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:24.153 06:34:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.153 06:34:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.153 anon_hugepages=0 00:04:24.153 06:34:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.153 06:34:38 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:24.153 06:34:38 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:24.153 06:34:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.153 06:34:38 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.153 06:34:38 -- setup/common.sh@18 -- # local node= 00:04:24.153 06:34:38 -- setup/common.sh@19 -- # local var val 00:04:24.153 06:34:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.153 06:34:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.153 06:34:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.153 06:34:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.153 06:34:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.153 06:34:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7935568 kB' 'MemAvailable: 9447732 kB' 'Buffers: 3704 kB' 'Cached: 1722020 kB' 'SwapCached: 0 kB' 'Active: 498096 kB' 'Inactive: 1346052 kB' 'Active(anon): 128908 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 120024 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163484 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95740 kB' 'KernelStack: 6496 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 321248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 
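By this point each lookup has come back 0: anon=0 (AnonHugePages), surp=0 (HugePages_Surp) and resv=0 (HugePages_Rsvd), and the script echoes nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0. The odd count comes from the test requesting HUGEMEM=2049 megabytes: 2049 * 1024 = 2098176 kB, which at the 2048 kB default hugepage size is 1024.5 pages, so the helper settles on the odd total 1025. The checks at hugepages.sh@107 and @110 then require HugePages_Total from /proc/meminfo to account exactly for the requested pages plus any surplus and reserved ones. A standalone sketch of that consistency check (awk stands in for the script's own get_meminfo, purely for illustration):

    nr_hugepages=1025 surp=0 resv=0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1025 in this run
    (( total == nr_hugepages + surp + resv )) || echo "unexpected surplus/reserved pages"
    (( total == nr_hugepages ))               || echo "kernel allocated a different count"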
00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # continue 
00:04:24.153 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.153 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.153 06:34:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.154 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.154 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.154 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.154 06:34:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.154 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.154 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.154 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.154 06:34:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.154 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.154 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.154 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.154 06:34:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.154 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.154 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.154 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.154 06:34:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.154 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.154 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.154 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.413 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.414 06:34:38 -- setup/common.sh@33 -- # echo 1025 00:04:24.414 06:34:38 -- setup/common.sh@33 -- # return 0 00:04:24.414 06:34:38 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:24.414 06:34:38 -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.414 06:34:38 -- setup/hugepages.sh@27 -- # local node 00:04:24.414 06:34:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.414 06:34:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
00:04:24.414 06:34:38 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:24.414 06:34:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.414 06:34:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.414 06:34:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.414 06:34:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.414 06:34:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.414 06:34:38 -- setup/common.sh@18 -- # local node=0 00:04:24.414 06:34:38 -- setup/common.sh@19 -- # local var val 00:04:24.414 06:34:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.414 06:34:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.414 06:34:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.414 06:34:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.414 06:34:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.414 06:34:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7935568 kB' 'MemUsed: 4303544 kB' 'SwapCached: 0 kB' 'Active: 498072 kB' 'Inactive: 1346052 kB' 'Active(anon): 128884 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'FilePages: 1725724 kB' 'Mapped: 50788 kB' 'AnonPages: 120020 kB' 'Shmem: 10484 kB' 'KernelStack: 6496 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67744 kB' 'Slab: 163476 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95732 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 
06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 
06:34:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.414 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.414 06:34:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.414 06:34:38 -- setup/common.sh@33 -- # echo 0 00:04:24.414 06:34:38 -- setup/common.sh@33 -- # return 0 00:04:24.414 06:34:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.414 06:34:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.414 06:34:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.414 06:34:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.414 06:34:38 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:24.414 node0=1025 expecting 1025 00:04:24.414 06:34:38 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:24.414 ************************************ 00:04:24.414 END TEST odd_alloc 00:04:24.414 ************************************ 00:04:24.414 00:04:24.414 real 0m0.580s 00:04:24.414 user 0m0.289s 00:04:24.414 sys 0m0.296s 00:04:24.414 06:34:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:24.414 06:34:38 -- common/autotest_common.sh@10 -- # set +x 00:04:24.414 06:34:38 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:24.414 06:34:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.414 06:34:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.414 06:34:38 -- common/autotest_common.sh@10 -- # set +x 00:04:24.414 ************************************ 00:04:24.414 START TEST custom_alloc 00:04:24.414 ************************************ 00:04:24.414 06:34:38 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:24.414 06:34:38 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:24.414 06:34:38 -- setup/hugepages.sh@169 -- # local node 00:04:24.414 06:34:38 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:24.414 06:34:38 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:24.414 06:34:38 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:24.414 06:34:38 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:04:24.414 06:34:38 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:24.414 06:34:38 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:24.414 06:34:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:24.414 06:34:38 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:24.414 06:34:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:24.414 06:34:38 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:24.414 06:34:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.414 06:34:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:24.414 06:34:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:24.414 06:34:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.414 06:34:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.414 06:34:38 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:24.414 06:34:38 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:24.414 06:34:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:24.414 06:34:38 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:24.414 06:34:38 -- setup/hugepages.sh@83 -- # : 0 00:04:24.414 06:34:38 -- setup/hugepages.sh@84 -- # : 0 00:04:24.414 06:34:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:24.414 06:34:38 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:24.414 06:34:38 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:24.415 06:34:38 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:24.415 06:34:38 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:24.415 06:34:38 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:24.415 06:34:38 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:24.415 06:34:38 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:24.415 06:34:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.415 06:34:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:24.415 06:34:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:24.415 06:34:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.415 06:34:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.415 06:34:38 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:24.415 06:34:38 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:24.415 06:34:38 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:24.415 06:34:38 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:24.415 06:34:38 -- setup/hugepages.sh@78 -- # return 0 00:04:24.415 06:34:38 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:24.415 06:34:38 -- setup/hugepages.sh@187 -- # setup output 00:04:24.415 06:34:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.415 06:34:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:24.674 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.674 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.674 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.674 06:34:38 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:24.674 06:34:38 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:24.674 06:34:38 -- setup/hugepages.sh@89 -- # local node 00:04:24.674 06:34:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.674 06:34:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.674 06:34:38 -- setup/hugepages.sh@92 -- # local surp 
00:04:24.674 06:34:38 -- setup/hugepages.sh@93 -- # local resv 00:04:24.674 06:34:38 -- setup/hugepages.sh@94 -- # local anon 00:04:24.674 06:34:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.674 06:34:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.674 06:34:38 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.674 06:34:38 -- setup/common.sh@18 -- # local node= 00:04:24.674 06:34:38 -- setup/common.sh@19 -- # local var val 00:04:24.674 06:34:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.674 06:34:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.674 06:34:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.674 06:34:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.674 06:34:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.674 06:34:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.674 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.674 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8989236 kB' 'MemAvailable: 10501400 kB' 'Buffers: 3704 kB' 'Cached: 1722020 kB' 'SwapCached: 0 kB' 'Active: 498156 kB' 'Inactive: 1346052 kB' 'Active(anon): 128968 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 120124 kB' 'Mapped: 50940 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163460 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95716 kB' 'KernelStack: 6456 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 
00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.675 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.675 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.676 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.676 06:34:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.676 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.676 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.676 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.676 06:34:38 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.676 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.676 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.676 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.676 06:34:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.676 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.676 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.676 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.676 06:34:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.676 06:34:38 -- setup/common.sh@33 -- # echo 0 00:04:24.676 06:34:38 -- setup/common.sh@33 -- # return 0 00:04:24.676 06:34:38 -- setup/hugepages.sh@97 -- # anon=0 00:04:24.676 06:34:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.676 06:34:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.676 06:34:38 -- setup/common.sh@18 -- # local node= 00:04:24.676 06:34:38 -- setup/common.sh@19 -- # local var val 00:04:24.676 06:34:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.676 06:34:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
00:04:24.676 06:34:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.676 06:34:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.676 06:34:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.676 06:34:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.676 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.676 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8989236 kB' 'MemAvailable: 10501400 kB' 'Buffers: 3704 kB' 'Cached: 1722020 kB' 'SwapCached: 0 kB' 'Active: 497872 kB' 'Inactive: 1346052 kB' 'Active(anon): 128684 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 119748 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163496 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95752 kB' 'KernelStack: 6480 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- 
setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 
00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.954 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.955 06:34:38 -- setup/common.sh@33 -- # echo 0 00:04:24.955 06:34:38 -- setup/common.sh@33 -- # return 0 00:04:24.955 06:34:38 -- setup/hugepages.sh@99 -- # surp=0 00:04:24.955 06:34:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.955 06:34:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.955 06:34:38 -- setup/common.sh@18 -- # local node= 00:04:24.955 06:34:38 -- setup/common.sh@19 -- # local var val 00:04:24.955 06:34:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.955 06:34:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.955 06:34:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.955 06:34:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.955 06:34:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.955 06:34:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8989236 kB' 'MemAvailable: 10501400 kB' 'Buffers: 3704 kB' 'Cached: 1722020 kB' 'SwapCached: 0 kB' 'Active: 497872 kB' 'Inactive: 1346052 kB' 'Active(anon): 128684 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 119748 kB' 'Mapped: 
50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163496 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95752 kB' 'KernelStack: 6480 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.955 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 
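[Editor's note] The quartet of records repeated above for every /proc/meminfo key (set IFS, read a line, test the key against the requested one, continue on mismatch) is the meminfo-scanning idiom in setup/common.sh's get_meminfo helper. The following is only a minimal sketch of that idiom reconstructed from the trace, not the literal script; the function body, argument handling, and the final usage lines are illustrative assumptions.

    #!/usr/bin/env bash
    # Sketch of the scan whose per-key "continue" records appear in the trace above.
    shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node <id> "

    get_meminfo() {    # usage: get_meminfo <Key> [node-id]   (illustrative helper)
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _

        # A node id switches the source to that node's own meminfo file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <id> "; drop the prefix so the
        # same scan works for both the global and the per-node file.
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue   # each mismatch is one "continue" record in the trace
            echo "$val"                        # numeric value only; the "kB" unit lands in $_
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Rsvd      # on the system traced above this prints 0
    get_meminfo HugePages_Total 0   # per-node variant, as used later for node 0
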
00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 
00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 06:34:38 -- setup/common.sh@33 -- # echo 0 00:04:24.956 06:34:38 -- setup/common.sh@33 -- # return 0 00:04:24.956 06:34:38 -- setup/hugepages.sh@100 -- # resv=0 00:04:24.956 nr_hugepages=512 00:04:24.956 06:34:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:24.956 resv_hugepages=0 00:04:24.956 06:34:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.956 06:34:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.956 surplus_hugepages=0 00:04:24.956 anon_hugepages=0 00:04:24.956 06:34:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.956 06:34:38 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:24.956 06:34:38 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:24.956 06:34:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.956 06:34:38 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.956 06:34:38 -- setup/common.sh@18 -- # local node= 00:04:24.956 06:34:38 -- setup/common.sh@19 -- # local var val 00:04:24.956 06:34:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.956 06:34:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.956 06:34:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.956 06:34:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.956 06:34:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.956 06:34:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8989236 kB' 'MemAvailable: 10501400 kB' 'Buffers: 3704 kB' 'Cached: 1722020 kB' 'SwapCached: 0 kB' 'Active: 498024 kB' 'Inactive: 1346052 kB' 'Active(anon): 128836 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 119900 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163492 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95748 kB' 'KernelStack: 6448 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.956 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 
00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 06:34:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.958 06:34:38 -- setup/common.sh@33 -- # echo 512 00:04:24.958 06:34:38 -- setup/common.sh@33 -- # return 0 00:04:24.958 06:34:38 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:24.958 06:34:38 -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.958 06:34:38 -- setup/hugepages.sh@27 -- # local node 00:04:24.958 06:34:38 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:04:24.958 06:34:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:24.958 06:34:38 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:24.958 06:34:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.958 06:34:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.958 06:34:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.958 06:34:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.958 06:34:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.958 06:34:38 -- setup/common.sh@18 -- # local node=0 00:04:24.958 06:34:38 -- setup/common.sh@19 -- # local var val 00:04:24.958 06:34:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.958 06:34:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.958 06:34:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.958 06:34:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.958 06:34:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.958 06:34:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8989236 kB' 'MemUsed: 3249876 kB' 'SwapCached: 0 kB' 'Active: 498076 kB' 'Inactive: 1346052 kB' 'Active(anon): 128888 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'FilePages: 1725724 kB' 'Mapped: 50788 kB' 'AnonPages: 120036 kB' 'Shmem: 10484 kB' 'KernelStack: 6496 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67744 kB' 'Slab: 163492 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95748 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 
06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.958 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.958 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # continue 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.959 06:34:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.959 06:34:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.959 06:34:38 -- setup/common.sh@33 -- # echo 0 00:04:24.959 06:34:38 -- setup/common.sh@33 -- # return 0 00:04:24.959 06:34:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.959 06:34:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.959 06:34:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.959 06:34:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.959 node0=512 expecting 512 00:04:24.959 06:34:38 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:24.959 06:34:38 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:24.959 00:04:24.959 real 0m0.531s 00:04:24.959 user 0m0.262s 00:04:24.959 sys 0m0.305s 00:04:24.959 06:34:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:24.959 06:34:38 -- common/autotest_common.sh@10 -- # set +x 00:04:24.959 ************************************ 00:04:24.959 END TEST custom_alloc 00:04:24.959 ************************************ 00:04:24.959 06:34:38 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:24.959 06:34:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.959 06:34:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.959 06:34:38 -- common/autotest_common.sh@10 -- # set +x 00:04:24.959 ************************************ 00:04:24.959 START TEST no_shrink_alloc 00:04:24.959 ************************************ 00:04:24.959 06:34:38 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:04:24.959 06:34:38 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:24.959 06:34:38 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:24.959 06:34:38 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:24.959 06:34:38 -- 
setup/hugepages.sh@51 -- # shift 00:04:24.959 06:34:38 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:24.959 06:34:38 -- setup/hugepages.sh@52 -- # local node_ids 00:04:24.959 06:34:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:24.959 06:34:38 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:24.959 06:34:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:24.959 06:34:38 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:24.959 06:34:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.959 06:34:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:24.959 06:34:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:24.959 06:34:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.959 06:34:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.959 06:34:38 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:24.959 06:34:38 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:24.959 06:34:38 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:24.959 06:34:38 -- setup/hugepages.sh@73 -- # return 0 00:04:24.959 06:34:38 -- setup/hugepages.sh@198 -- # setup output 00:04:24.959 06:34:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.959 06:34:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:25.218 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.218 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.218 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.480 06:34:39 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:25.480 06:34:39 -- setup/hugepages.sh@89 -- # local node 00:04:25.480 06:34:39 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:25.481 06:34:39 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:25.481 06:34:39 -- setup/hugepages.sh@92 -- # local surp 00:04:25.481 06:34:39 -- setup/hugepages.sh@93 -- # local resv 00:04:25.481 06:34:39 -- setup/hugepages.sh@94 -- # local anon 00:04:25.481 06:34:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.481 06:34:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:25.481 06:34:39 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.481 06:34:39 -- setup/common.sh@18 -- # local node= 00:04:25.481 06:34:39 -- setup/common.sh@19 -- # local var val 00:04:25.481 06:34:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.481 06:34:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.481 06:34:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.481 06:34:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.481 06:34:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.481 06:34:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7941064 kB' 'MemAvailable: 9453228 kB' 'Buffers: 3704 kB' 'Cached: 1722020 kB' 'SwapCached: 0 kB' 'Active: 498376 kB' 'Inactive: 1346052 kB' 'Active(anon): 129188 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120240 kB' 
'Mapped: 50880 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163388 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95644 kB' 'KernelStack: 6456 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.481 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.481 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.482 06:34:39 -- setup/common.sh@33 -- # echo 0 00:04:25.482 06:34:39 -- setup/common.sh@33 -- # return 0 00:04:25.482 06:34:39 -- setup/hugepages.sh@97 -- # anon=0 00:04:25.482 06:34:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:25.482 06:34:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.482 06:34:39 -- setup/common.sh@18 -- # local node= 00:04:25.482 06:34:39 -- setup/common.sh@19 -- # local var val 00:04:25.482 06:34:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.482 06:34:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.482 06:34:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.482 06:34:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.482 06:34:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.482 06:34:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7941064 kB' 'MemAvailable: 9453228 kB' 'Buffers: 3704 kB' 'Cached: 1722020 kB' 'SwapCached: 0 kB' 'Active: 498100 kB' 'Inactive: 1346052 kB' 'Active(anon): 128912 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120052 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163388 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95644 kB' 'KernelStack: 6496 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 
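The xtrace above is SPDK's get_meminfo helper (setup/common.sh) taking one snapshot of /proc/meminfo and then scanning it field by field until it reaches the key it was asked for; here setup/hugepages.sh@99 asked for HugePages_Surp, so every other key falls into the "continue" branch. A minimal sketch of that lookup, reconstructed from the trace (the variable names mirror the trace; treat the exact internals of setup/common.sh as an approximation):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

    # Sketch of the get_meminfo pattern visible in the xtrace:
    # snapshot the meminfo file, then scan line by line until the
    # requested key is found and print its value.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node id, read the per-NUMA-node copy instead.
        [[ -e /sys/devices/system/node/node${node}/meminfo ]] &&
            mem_f=/sys/devices/system/node/node${node}/meminfo
        local mem var val _ line
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done
        echo 0
    }

    get_meminfo HugePages_Surp    # prints 0 on the host captured above
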
00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.482 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.482 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 
00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.483 06:34:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.483 06:34:39 -- setup/common.sh@33 -- # echo 0 00:04:25.483 06:34:39 -- setup/common.sh@33 -- # return 0 00:04:25.483 06:34:39 -- setup/hugepages.sh@99 -- # surp=0 00:04:25.483 06:34:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:25.483 06:34:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:25.483 06:34:39 -- setup/common.sh@18 -- # local node= 00:04:25.483 06:34:39 -- setup/common.sh@19 -- # local var val 00:04:25.483 06:34:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.483 06:34:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.483 06:34:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.483 06:34:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.483 06:34:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.483 06:34:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.483 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7942508 kB' 'MemAvailable: 9454672 kB' 'Buffers: 3704 kB' 'Cached: 1722020 kB' 'SwapCached: 0 kB' 'Active: 497776 kB' 'Inactive: 1346052 kB' 'Active(anon): 128588 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119740 kB' 'Mapped: 51048 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163384 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95640 kB' 'KernelStack: 6512 kB' 'PageTables: 4588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 
-- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 
-- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.484 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.484 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.485 
06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.485 06:34:39 -- setup/common.sh@33 -- # echo 0 00:04:25.485 06:34:39 -- setup/common.sh@33 -- # return 0 00:04:25.485 06:34:39 -- setup/hugepages.sh@100 -- # resv=0 00:04:25.485 nr_hugepages=1024 00:04:25.485 06:34:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:25.485 resv_hugepages=0 00:04:25.485 06:34:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:25.485 surplus_hugepages=0 00:04:25.485 06:34:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:25.485 06:34:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:25.485 anon_hugepages=0 00:04:25.485 06:34:39 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.485 06:34:39 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:25.485 06:34:39 -- setup/hugepages.sh@110 -- # get_meminfo 
HugePages_Total 00:04:25.485 06:34:39 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.485 06:34:39 -- setup/common.sh@18 -- # local node= 00:04:25.485 06:34:39 -- setup/common.sh@19 -- # local var val 00:04:25.485 06:34:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.485 06:34:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.485 06:34:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.485 06:34:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.485 06:34:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.485 06:34:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.485 06:34:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7942800 kB' 'MemAvailable: 9454964 kB' 'Buffers: 3704 kB' 'Cached: 1722024 kB' 'SwapCached: 0 kB' 'Active: 498048 kB' 'Inactive: 1346052 kB' 'Active(anon): 128860 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120064 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163380 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95636 kB' 'KernelStack: 6496 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.485 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.485 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 
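Once the anon, surplus and reserved counters have come back (all 0 above, hugepages.sh@97/@99/@100), the pass in progress here re-reads HugePages_Total and hugepages.sh checks that the allocated pool is fully accounted for: the (( 1024 == nr_hugepages + surp + resv )) guards at @107/@110 and the nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages echoes are exactly that bookkeeping. The same check, re-expressed with a trivial awk lookup standing in for get_meminfo (a sketch, not the script itself):

    #!/usr/bin/env bash
    # Re-statement of the accounting guard at setup/hugepages.sh@107-@110.
    meminfo_val() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

    nr_hugepages=1024                      # pool size expected by this test run
    anon=$(meminfo_val AnonHugePages)      # @97  -> anon=0 (kB)
    surp=$(meminfo_val HugePages_Surp)     # @99  -> surp=0
    resv=$(meminfo_val HugePages_Rsvd)     # @100 -> resv=0
    total=$(meminfo_val HugePages_Total)   # 1024 on this host

    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

    # Consistent when everything the kernel reports as allocated is either
    # the requested pages, surplus pages, or reserved pages.
    (( total == nr_hugepages + surp + resv )) && echo "hugepage pool accounted for"
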
00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 
-- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.486 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.486 06:34:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.486 06:34:39 -- setup/common.sh@33 -- # echo 1024 00:04:25.486 06:34:39 -- setup/common.sh@33 -- # return 0 00:04:25.486 06:34:39 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.486 06:34:39 -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.486 06:34:39 -- setup/hugepages.sh@27 -- # local node 00:04:25.486 06:34:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.486 06:34:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:25.486 06:34:39 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:25.486 06:34:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.486 06:34:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.486 06:34:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.487 06:34:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.487 06:34:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.487 06:34:39 -- setup/common.sh@18 -- # local node=0 00:04:25.487 06:34:39 -- setup/common.sh@19 -- # local var val 00:04:25.487 06:34:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.487 06:34:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.487 06:34:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.487 06:34:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.487 06:34:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.487 06:34:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7942800 kB' 'MemUsed: 4296312 kB' 'SwapCached: 0 kB' 'Active: 497972 kB' 'Inactive: 1346052 kB' 'Active(anon): 128784 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'FilePages: 1725728 kB' 'Mapped: 50788 kB' 'AnonPages: 119944 kB' 'Shmem: 10484 kB' 'KernelStack: 6480 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67744 kB' 'Slab: 163380 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 
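After the system-wide pass, verify_nr_hugepages walks the NUMA nodes (get_nodes, hugepages.sh@112-@117 above) and repeats the same lookup against the per-node file: the trace shows mem_f switching to /sys/devices/system/node/node0/meminfo and the "Node 0 " prefix being stripped before the scan. An equivalent one-off query for node 0 (an illustration; instead of stripping the prefix it simply reads the third column):

    # Per-node meminfo lines look like "Node 0 HugePages_Surp: 0",
    # so the key sits in column 3 and the value in column 4.
    awk '$3 == "HugePages_Surp:" { print $4 }' /sys/devices/system/node/node0/meminfo
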
00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # continue 00:04:25.487 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 
00:04:25.487 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.487 06:34:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.487 06:34:39 -- setup/common.sh@33 -- # echo 0 00:04:25.488 06:34:39 -- setup/common.sh@33 -- # return 0 00:04:25.488 06:34:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.488 06:34:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.488 06:34:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.488 06:34:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.488 06:34:39 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:25.488 node0=1024 expecting 1024 00:04:25.488 06:34:39 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:25.488 06:34:39 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:25.488 06:34:39 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:25.488 06:34:39 -- setup/hugepages.sh@202 -- # setup output 00:04:25.488 06:34:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.488 06:34:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:25.746 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.008 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:26.008 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:26.008 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:26.008 06:34:39 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:26.008 06:34:39 -- setup/hugepages.sh@89 -- # local node 00:04:26.008 06:34:39 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:26.008 06:34:39 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:26.008 06:34:39 -- setup/hugepages.sh@92 -- # local surp 00:04:26.008 06:34:39 -- setup/hugepages.sh@93 -- # local resv 00:04:26.008 06:34:39 -- setup/hugepages.sh@94 -- # local anon 00:04:26.008 06:34:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.008 06:34:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:26.008 06:34:39 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.008 06:34:39 -- setup/common.sh@18 -- # local node= 00:04:26.008 06:34:39 -- setup/common.sh@19 -- # local var val 00:04:26.008 06:34:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.008 06:34:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.008 06:34:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.008 06:34:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.008 06:34:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.008 06:34:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.008 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.008 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7938392 kB' 'MemAvailable: 9450560 kB' 'Buffers: 3704 kB' 'Cached: 1722024 kB' 'SwapCached: 0 kB' 'Active: 498796 kB' 'Inactive: 1346056 kB' 'Active(anon): 129608 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120988 kB' 'Mapped: 50940 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163404 kB' 'SReclaimable: 67744 
kB' 'SUnreclaim: 95660 kB' 'KernelStack: 6556 kB' 'PageTables: 4936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ Active(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 
06:34:39 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 
00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.009 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.009 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.010 06:34:39 -- setup/common.sh@33 -- # echo 0 00:04:26.010 06:34:39 -- setup/common.sh@33 -- # return 0 00:04:26.010 06:34:39 -- setup/hugepages.sh@97 -- # anon=0 00:04:26.010 06:34:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.010 06:34:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.010 06:34:39 -- setup/common.sh@18 -- # local node= 00:04:26.010 06:34:39 -- setup/common.sh@19 -- # local var val 00:04:26.010 06:34:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.010 06:34:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.010 06:34:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.010 06:34:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.010 06:34:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.010 06:34:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7938392 kB' 'MemAvailable: 9450560 kB' 'Buffers: 3704 kB' 'Cached: 1722024 kB' 'SwapCached: 0 kB' 'Active: 498184 kB' 'Inactive: 1346056 kB' 'Active(anon): 128996 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120308 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163420 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95676 kB' 'KernelStack: 6500 kB' 'PageTables: 4676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 
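The xtrace above is the field-by-field scan that setup/common.sh performs for every get_meminfo query: it snapshots /proc/meminfo (or a node-local meminfo file), then walks the snapshot with IFS=': ' and read -r var val _, hitting "continue" for every field that is not the one requested and echoing the value on a match. The AnonHugePages pass just finished with anon=0, and the snapshot printed immediately above is the one the HugePages_Surp pass walks next. A minimal sketch of that loop, reconstructed from the trace rather than copied from the SPDK source (the helper structure and exact behaviour are assumptions), looks like this:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo scan seen in the xtrace; reconstructed, not the SPDK source.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ mem_f=/proc/meminfo
        local -a mem
        # A per-node query reads the node-local meminfo file instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # strip the "Node N " prefix on per-node files
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # every non-matching field produces the "continue" lines above
            echo "${val:-0}"                  # matching field: print its value and stop
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        echo 0                                # field absent: report 0
    }

    get_meminfo HugePages_Surp       # system-wide query, as traced here
    get_meminfo HugePages_Surp 0     # node 0 query, as in the per-node pass

The snapshot itself is internally consistent: HugePages_Total: 1024 at Hugepagesize: 2048 kB accounts for Hugetlb: 2097152 kB (1024 × 2048 kB), with HugePages_Free: 1024 and HugePages_Surp: 0, so the surplus query below resolves to 0.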
00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 
06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.010 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.010 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': 
' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.011 06:34:39 -- setup/common.sh@33 -- # echo 0 00:04:26.011 06:34:39 -- setup/common.sh@33 -- # return 0 00:04:26.011 06:34:39 -- setup/hugepages.sh@99 -- # surp=0 00:04:26.011 06:34:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:26.011 06:34:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:26.011 06:34:39 -- setup/common.sh@18 -- # local node= 00:04:26.011 06:34:39 -- setup/common.sh@19 -- # local var val 00:04:26.011 06:34:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.011 06:34:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.011 06:34:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.011 06:34:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.011 06:34:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.011 06:34:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7938392 kB' 'MemAvailable: 9450560 kB' 'Buffers: 3704 kB' 'Cached: 1722024 kB' 'SwapCached: 0 kB' 'Active: 498128 kB' 'Inactive: 1346056 kB' 'Active(anon): 128940 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120292 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163412 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95668 kB' 'KernelStack: 6484 kB' 'PageTables: 4628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.011 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.011 06:34:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 
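The same scan now repeats for HugePages_Rsvd. These counters have distinct meanings in /proc/meminfo: HugePages_Total is the size of the huge page pool, HugePages_Free is how many pages are unallocated, HugePages_Rsvd is pages already promised to a mapping but not yet faulted in, and HugePages_Surp is overcommit pages sitting above the configured pool. Reading Rsvd and Surp separately lets the verification confirm the 1024-page pool is neither partially reserved nor inflated by overcommit. To eyeball the same fields outside the harness, a one-liner such as the following is enough (a convenience command, not part of the SPDK scripts):

    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):' /proc/meminfo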
00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.012 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.012 06:34:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.012 06:34:39 -- setup/common.sh@33 -- # echo 0 00:04:26.012 06:34:39 -- setup/common.sh@33 -- # return 0 00:04:26.012 nr_hugepages=1024 00:04:26.012 resv_hugepages=0 00:04:26.013 surplus_hugepages=0 00:04:26.013 anon_hugepages=0 00:04:26.013 06:34:39 -- setup/hugepages.sh@100 -- # resv=0 00:04:26.013 06:34:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:26.013 06:34:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.013 06:34:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.013 06:34:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.013 06:34:39 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.013 06:34:39 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:26.013 06:34:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.013 06:34:39 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:26.013 06:34:39 -- setup/common.sh@18 -- # local node= 00:04:26.013 06:34:39 -- setup/common.sh@19 -- # local var val 00:04:26.013 06:34:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.013 06:34:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.013 06:34:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.013 06:34:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.013 06:34:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.013 06:34:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.013 06:34:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7938392 kB' 'MemAvailable: 9450560 kB' 'Buffers: 3704 kB' 'Cached: 1722024 kB' 'SwapCached: 0 kB' 'Active: 498560 kB' 'Inactive: 1346056 kB' 'Active(anon): 129372 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118616 kB' 'Mapped: 50268 kB' 'Shmem: 10484 kB' 'KReclaimable: 67744 kB' 'Slab: 163412 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95668 kB' 'KernelStack: 6468 kB' 'PageTables: 4580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 318824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 3997696 kB' 'DirectMap1G: 10485760 kB' 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.013 06:34:39 -- 
setup/common.sh@32 -- # continue 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 
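Once the three preliminary queries return anon=0, surp=0 and resv=0, hugepages.sh checks that the kernel's pool is fully accounted for: HugePages_Total (1024) must equal the requested nr_hugepages plus surplus plus reserved, which is exactly the (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )) tests traced just above, and the HugePages_Total scan now in progress re-reads the counter to echo 1024 back. A compact sketch of that accounting, with names taken from the trace but the exact source structure assumed, is:

    # Hedged sketch of the verify_nr_hugepages accounting traced above.
    # Relies on a get_meminfo helper like the one in the real setup/common.sh.
    nr_hugepages=1024                     # 1024 in this run (NRHUGE was 512, but 1024 pages were already allocated)
    anon=$(get_meminfo AnonHugePages)     # 0 in this run
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0
    total=$(get_meminfo HugePages_Total)  # 1024

    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage accounting" >&2
    (( total == nr_hugepages )) && echo "nr_hugepages=$nr_hugepages"

    # Per-node pass (node0 is the only node here): the node figure plus its own
    # reserved/surplus counts must line up with the expected total, giving the
    # "node0=1024 expecting 1024" line seen earlier in this log.
    node0_surp=$(get_meminfo HugePages_Surp 0)
    echo "node0=$(( nr_hugepages + node0_surp )) expecting $nr_hugepages"

After this check the script repeats the per-node HugePages_Surp query against /sys/devices/system/node/node0/meminfo, which is the scan that closes out this pass.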
00:04:26.013 06:34:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.013 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.013 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 
06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.014 06:34:39 -- setup/common.sh@33 -- # echo 1024 00:04:26.014 06:34:39 -- setup/common.sh@33 -- # return 0 00:04:26.014 06:34:39 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.014 06:34:39 -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.014 06:34:39 -- setup/hugepages.sh@27 -- # local node 00:04:26.014 06:34:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.014 06:34:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:26.014 06:34:39 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:26.014 06:34:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.014 06:34:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.014 06:34:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.014 06:34:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:26.014 06:34:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.014 06:34:39 -- setup/common.sh@18 -- # local node=0 00:04:26.014 06:34:39 -- setup/common.sh@19 -- # local var val 00:04:26.014 06:34:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.014 06:34:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.014 06:34:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.014 06:34:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.014 06:34:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.014 06:34:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7938976 kB' 'MemUsed: 4300136 kB' 'SwapCached: 0 kB' 'Active: 495460 kB' 'Inactive: 1346056 kB' 'Active(anon): 126272 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1346056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'FilePages: 1725728 kB' 'Mapped: 49940 kB' 'AnonPages: 117352 kB' 'Shmem: 10484 kB' 'KernelStack: 6404 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67744 kB' 'Slab: 163328 kB' 'SReclaimable: 67744 kB' 'SUnreclaim: 95584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.014 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.014 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 
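The xtrace above records setup/common.sh walking every key of the per-node meminfo dump until it reaches the field it was asked for (here HugePages_Surp on node0, and HugePages_Total earlier). A minimal stand-alone sketch of the same lookup, with an assumed helper name and the node0 path taken from the log, would be:

# Hedged sketch (helper name assumed): print one meminfo field, optionally
# from a single NUMA node's view, the way the trace above resolves
# HugePages_Total / HugePages_Surp.
get_meminfo_field() {
    local field=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo
    fi
    awk -v f="${field}:" '
        $1 == f { print $2; exit }   # /proc/meminfo:  "Field:  value [kB]"
        $3 == f { print $4; exit }   # per-node file:  "Node N Field:  value [kB]"
    ' "$mem_f"
}

get_meminfo_field HugePages_Surp 0   # -> 0 for the node0 dump shown above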
06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- 
# continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@32 -- # continue 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.015 06:34:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.015 06:34:39 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.015 06:34:39 -- setup/common.sh@33 -- # echo 0 00:04:26.015 06:34:39 -- setup/common.sh@33 -- # return 0 00:04:26.015 06:34:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.015 06:34:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.015 06:34:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.015 06:34:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.015 06:34:39 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:26.015 node0=1024 expecting 1024 00:04:26.015 06:34:39 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:26.015 00:04:26.015 real 0m1.067s 00:04:26.015 user 0m0.538s 00:04:26.015 sys 0m0.601s 00:04:26.015 06:34:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:26.015 06:34:39 -- common/autotest_common.sh@10 -- # set +x 00:04:26.015 ************************************ 00:04:26.015 END TEST no_shrink_alloc 00:04:26.015 ************************************ 00:04:26.015 06:34:39 -- setup/hugepages.sh@217 -- # clear_hp 00:04:26.015 06:34:39 -- setup/hugepages.sh@37 -- # local node hp 00:04:26.015 06:34:39 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:26.015 06:34:39 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.015 06:34:39 -- setup/hugepages.sh@41 -- # echo 0 00:04:26.015 06:34:39 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.015 06:34:39 -- setup/hugepages.sh@41 -- # echo 0 00:04:26.015 06:34:39 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:26.015 06:34:39 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:26.015 ************************************ 00:04:26.015 END TEST hugepages 00:04:26.015 ************************************ 00:04:26.015 00:04:26.015 real 0m4.911s 00:04:26.015 user 0m2.331s 00:04:26.015 sys 0m2.633s 00:04:26.015 06:34:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:26.015 06:34:39 -- common/autotest_common.sh@10 -- # set +x 00:04:26.015 06:34:39 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:26.015 06:34:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.015 06:34:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.016 06:34:39 -- common/autotest_common.sh@10 -- # set +x 00:04:26.274 ************************************ 00:04:26.274 START TEST driver 00:04:26.274 ************************************ 00:04:26.274 06:34:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:26.274 * Looking for test storage... 
00:04:26.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:26.274 06:34:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:26.274 06:34:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:26.274 06:34:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:26.274 06:34:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:26.274 06:34:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:26.274 06:34:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:26.274 06:34:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:26.274 06:34:40 -- scripts/common.sh@335 -- # IFS=.-: 00:04:26.274 06:34:40 -- scripts/common.sh@335 -- # read -ra ver1 00:04:26.274 06:34:40 -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.274 06:34:40 -- scripts/common.sh@336 -- # read -ra ver2 00:04:26.274 06:34:40 -- scripts/common.sh@337 -- # local 'op=<' 00:04:26.274 06:34:40 -- scripts/common.sh@339 -- # ver1_l=2 00:04:26.274 06:34:40 -- scripts/common.sh@340 -- # ver2_l=1 00:04:26.274 06:34:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:26.274 06:34:40 -- scripts/common.sh@343 -- # case "$op" in 00:04:26.274 06:34:40 -- scripts/common.sh@344 -- # : 1 00:04:26.274 06:34:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:26.274 06:34:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:26.274 06:34:40 -- scripts/common.sh@364 -- # decimal 1 00:04:26.274 06:34:40 -- scripts/common.sh@352 -- # local d=1 00:04:26.274 06:34:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.274 06:34:40 -- scripts/common.sh@354 -- # echo 1 00:04:26.274 06:34:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:26.274 06:34:40 -- scripts/common.sh@365 -- # decimal 2 00:04:26.274 06:34:40 -- scripts/common.sh@352 -- # local d=2 00:04:26.274 06:34:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.274 06:34:40 -- scripts/common.sh@354 -- # echo 2 00:04:26.274 06:34:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:26.274 06:34:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:26.274 06:34:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:26.274 06:34:40 -- scripts/common.sh@367 -- # return 0 00:04:26.274 06:34:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.274 06:34:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:26.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.275 --rc genhtml_branch_coverage=1 00:04:26.275 --rc genhtml_function_coverage=1 00:04:26.275 --rc genhtml_legend=1 00:04:26.275 --rc geninfo_all_blocks=1 00:04:26.275 --rc geninfo_unexecuted_blocks=1 00:04:26.275 00:04:26.275 ' 00:04:26.275 06:34:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:26.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.275 --rc genhtml_branch_coverage=1 00:04:26.275 --rc genhtml_function_coverage=1 00:04:26.275 --rc genhtml_legend=1 00:04:26.275 --rc geninfo_all_blocks=1 00:04:26.275 --rc geninfo_unexecuted_blocks=1 00:04:26.275 00:04:26.275 ' 00:04:26.275 06:34:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:26.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.275 --rc genhtml_branch_coverage=1 00:04:26.275 --rc genhtml_function_coverage=1 00:04:26.275 --rc genhtml_legend=1 00:04:26.275 --rc geninfo_all_blocks=1 00:04:26.275 --rc geninfo_unexecuted_blocks=1 00:04:26.275 00:04:26.275 ' 00:04:26.275 06:34:40 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:26.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.275 --rc genhtml_branch_coverage=1 00:04:26.275 --rc genhtml_function_coverage=1 00:04:26.275 --rc genhtml_legend=1 00:04:26.275 --rc geninfo_all_blocks=1 00:04:26.275 --rc geninfo_unexecuted_blocks=1 00:04:26.275 00:04:26.275 ' 00:04:26.275 06:34:40 -- setup/driver.sh@68 -- # setup reset 00:04:26.275 06:34:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:26.275 06:34:40 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:26.843 06:34:40 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:26.843 06:34:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.843 06:34:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.843 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:04:26.843 ************************************ 00:04:26.843 START TEST guess_driver 00:04:26.843 ************************************ 00:04:26.843 06:34:40 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:26.843 06:34:40 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:26.843 06:34:40 -- setup/driver.sh@47 -- # local fail=0 00:04:26.843 06:34:40 -- setup/driver.sh@49 -- # pick_driver 00:04:26.843 06:34:40 -- setup/driver.sh@36 -- # vfio 00:04:26.843 06:34:40 -- setup/driver.sh@21 -- # local iommu_grups 00:04:26.843 06:34:40 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:26.843 06:34:40 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:26.843 06:34:40 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:26.843 06:34:40 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:26.843 06:34:40 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:26.843 06:34:40 -- setup/driver.sh@32 -- # return 1 00:04:26.843 06:34:40 -- setup/driver.sh@38 -- # uio 00:04:26.843 06:34:40 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:26.843 06:34:40 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:26.843 06:34:40 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:26.843 06:34:40 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:26.843 06:34:40 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:26.843 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:26.843 06:34:40 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:26.843 06:34:40 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:26.843 06:34:40 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:26.843 Looking for driver=uio_pci_generic 00:04:26.843 06:34:40 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:26.843 06:34:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.843 06:34:40 -- setup/driver.sh@45 -- # setup output config 00:04:26.843 06:34:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.843 06:34:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:27.779 06:34:41 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:27.779 06:34:41 -- setup/driver.sh@58 -- # continue 00:04:27.779 06:34:41 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.779 06:34:41 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.779 06:34:41 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:04:27.779 06:34:41 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.779 06:34:41 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.779 06:34:41 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:27.779 06:34:41 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.779 06:34:41 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:27.779 06:34:41 -- setup/driver.sh@65 -- # setup reset 00:04:27.779 06:34:41 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:27.779 06:34:41 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:28.347 00:04:28.347 real 0m1.438s 00:04:28.347 user 0m0.542s 00:04:28.347 sys 0m0.890s 00:04:28.347 ************************************ 00:04:28.347 END TEST guess_driver 00:04:28.347 ************************************ 00:04:28.347 06:34:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:28.347 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.347 00:04:28.347 real 0m2.238s 00:04:28.347 user 0m0.860s 00:04:28.347 sys 0m1.439s 00:04:28.347 06:34:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:28.347 ************************************ 00:04:28.347 END TEST driver 00:04:28.347 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.347 ************************************ 00:04:28.347 06:34:42 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:28.347 06:34:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:28.347 06:34:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:28.347 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.347 ************************************ 00:04:28.347 START TEST devices 00:04:28.347 ************************************ 00:04:28.347 06:34:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:28.607 * Looking for test storage... 00:04:28.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:28.607 06:34:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:28.607 06:34:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:28.607 06:34:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:28.607 06:34:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:28.607 06:34:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:28.607 06:34:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:28.607 06:34:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:28.607 06:34:42 -- scripts/common.sh@335 -- # IFS=.-: 00:04:28.607 06:34:42 -- scripts/common.sh@335 -- # read -ra ver1 00:04:28.607 06:34:42 -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.607 06:34:42 -- scripts/common.sh@336 -- # read -ra ver2 00:04:28.607 06:34:42 -- scripts/common.sh@337 -- # local 'op=<' 00:04:28.607 06:34:42 -- scripts/common.sh@339 -- # ver1_l=2 00:04:28.607 06:34:42 -- scripts/common.sh@340 -- # ver2_l=1 00:04:28.607 06:34:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:28.607 06:34:42 -- scripts/common.sh@343 -- # case "$op" in 00:04:28.607 06:34:42 -- scripts/common.sh@344 -- # : 1 00:04:28.607 06:34:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:28.607 06:34:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.607 06:34:42 -- scripts/common.sh@364 -- # decimal 1 00:04:28.607 06:34:42 -- scripts/common.sh@352 -- # local d=1 00:04:28.607 06:34:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.607 06:34:42 -- scripts/common.sh@354 -- # echo 1 00:04:28.607 06:34:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:28.607 06:34:42 -- scripts/common.sh@365 -- # decimal 2 00:04:28.607 06:34:42 -- scripts/common.sh@352 -- # local d=2 00:04:28.607 06:34:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.607 06:34:42 -- scripts/common.sh@354 -- # echo 2 00:04:28.607 06:34:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:28.607 06:34:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:28.607 06:34:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:28.607 06:34:42 -- scripts/common.sh@367 -- # return 0 00:04:28.607 06:34:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.607 06:34:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:28.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.607 --rc genhtml_branch_coverage=1 00:04:28.607 --rc genhtml_function_coverage=1 00:04:28.607 --rc genhtml_legend=1 00:04:28.607 --rc geninfo_all_blocks=1 00:04:28.607 --rc geninfo_unexecuted_blocks=1 00:04:28.607 00:04:28.607 ' 00:04:28.607 06:34:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:28.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.607 --rc genhtml_branch_coverage=1 00:04:28.607 --rc genhtml_function_coverage=1 00:04:28.607 --rc genhtml_legend=1 00:04:28.607 --rc geninfo_all_blocks=1 00:04:28.607 --rc geninfo_unexecuted_blocks=1 00:04:28.607 00:04:28.607 ' 00:04:28.607 06:34:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:28.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.607 --rc genhtml_branch_coverage=1 00:04:28.607 --rc genhtml_function_coverage=1 00:04:28.607 --rc genhtml_legend=1 00:04:28.607 --rc geninfo_all_blocks=1 00:04:28.607 --rc geninfo_unexecuted_blocks=1 00:04:28.607 00:04:28.607 ' 00:04:28.607 06:34:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:28.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.607 --rc genhtml_branch_coverage=1 00:04:28.607 --rc genhtml_function_coverage=1 00:04:28.607 --rc genhtml_legend=1 00:04:28.607 --rc geninfo_all_blocks=1 00:04:28.607 --rc geninfo_unexecuted_blocks=1 00:04:28.607 00:04:28.607 ' 00:04:28.607 06:34:42 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:28.607 06:34:42 -- setup/devices.sh@192 -- # setup reset 00:04:28.607 06:34:42 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:28.607 06:34:42 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:29.544 06:34:43 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:29.544 06:34:43 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:29.544 06:34:43 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:29.544 06:34:43 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:29.544 06:34:43 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:29.544 06:34:43 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:29.544 06:34:43 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:29.544 06:34:43 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:29.544 06:34:43 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:04:29.544 06:34:43 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:29.544 06:34:43 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:29.544 06:34:43 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:29.544 06:34:43 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:29.544 06:34:43 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:29.544 06:34:43 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:29.544 06:34:43 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:29.544 06:34:43 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:29.544 06:34:43 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:29.544 06:34:43 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:29.544 06:34:43 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:29.544 06:34:43 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:29.544 06:34:43 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:29.544 06:34:43 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:29.544 06:34:43 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:29.544 06:34:43 -- setup/devices.sh@196 -- # blocks=() 00:04:29.544 06:34:43 -- setup/devices.sh@196 -- # declare -a blocks 00:04:29.544 06:34:43 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:29.544 06:34:43 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:29.544 06:34:43 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:29.544 06:34:43 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:29.544 06:34:43 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:29.544 06:34:43 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:29.544 06:34:43 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:29.544 06:34:43 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:29.544 06:34:43 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:29.544 06:34:43 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:29.544 06:34:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:29.544 No valid GPT data, bailing 00:04:29.544 06:34:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:29.544 06:34:43 -- scripts/common.sh@393 -- # pt= 00:04:29.544 06:34:43 -- scripts/common.sh@394 -- # return 1 00:04:29.544 06:34:43 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:29.544 06:34:43 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:29.544 06:34:43 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:29.544 06:34:43 -- setup/common.sh@80 -- # echo 5368709120 00:04:29.544 06:34:43 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:29.544 06:34:43 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:29.544 06:34:43 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:29.544 06:34:43 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:29.544 06:34:43 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:29.544 06:34:43 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:29.544 06:34:43 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:29.544 06:34:43 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:29.544 06:34:43 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
00:04:29.544 06:34:43 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:29.544 06:34:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:29.544 No valid GPT data, bailing 00:04:29.544 06:34:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:29.544 06:34:43 -- scripts/common.sh@393 -- # pt= 00:04:29.544 06:34:43 -- scripts/common.sh@394 -- # return 1 00:04:29.544 06:34:43 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:29.544 06:34:43 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:29.544 06:34:43 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:29.544 06:34:43 -- setup/common.sh@80 -- # echo 4294967296 00:04:29.544 06:34:43 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:29.544 06:34:43 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:29.544 06:34:43 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:29.544 06:34:43 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:29.544 06:34:43 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:29.544 06:34:43 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:29.544 06:34:43 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:29.544 06:34:43 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:29.544 06:34:43 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:29.544 06:34:43 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:29.544 06:34:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:29.544 No valid GPT data, bailing 00:04:29.544 06:34:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:29.544 06:34:43 -- scripts/common.sh@393 -- # pt= 00:04:29.544 06:34:43 -- scripts/common.sh@394 -- # return 1 00:04:29.544 06:34:43 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:29.544 06:34:43 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:29.544 06:34:43 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:29.544 06:34:43 -- setup/common.sh@80 -- # echo 4294967296 00:04:29.544 06:34:43 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:29.544 06:34:43 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:29.544 06:34:43 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:29.544 06:34:43 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:29.544 06:34:43 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:29.544 06:34:43 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:29.544 06:34:43 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:29.544 06:34:43 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:29.544 06:34:43 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:29.544 06:34:43 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:29.544 06:34:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:29.544 No valid GPT data, bailing 00:04:29.544 06:34:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:29.544 06:34:43 -- scripts/common.sh@393 -- # pt= 00:04:29.544 06:34:43 -- scripts/common.sh@394 -- # return 1 00:04:29.544 06:34:43 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:29.544 06:34:43 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:29.544 06:34:43 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:29.544 06:34:43 -- setup/common.sh@80 -- # echo 4294967296 
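The device loop traced here screens each /sys/block/nvme* entry: zoned namespaces are skipped, spdk-gpt.py / blkid confirm there is no partition table in use ("No valid GPT data, bailing"), and the capacity must clear min_disk_size before the device joins the test set. A rough stand-alone equivalent of that per-device check, with an assumed function name and the 3 GiB threshold copied from the log, is:

min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in the trace

device_usable_for_tests() {
    # Sketch only -- mirrors the per-device checks logged above.
    local blk=$1
    # 1) skip zoned namespaces (the trace compares queue/zoned against "none")
    if [[ -e /sys/block/$blk/queue/zoned ]]; then
        [[ $(< "/sys/block/$blk/queue/zoned") == none ]] || return 1
    fi
    # 2) skip devices that already carry a partition table
    [[ -z $(blkid -s PTTYPE -o value "/dev/$blk") ]] || return 1
    # 3) require at least min_disk_size bytes (the size file counts 512-byte sectors)
    (( $(< "/sys/block/$blk/size") * 512 >= min_disk_size ))
}

device_usable_for_tests nvme0n1 && echo "nvme0n1 can be used as a test disk"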
00:04:29.544 06:34:43 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:29.544 06:34:43 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:29.544 06:34:43 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:29.544 06:34:43 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:29.544 06:34:43 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:29.544 06:34:43 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:29.544 06:34:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.544 06:34:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.544 06:34:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.803 ************************************ 00:04:29.803 START TEST nvme_mount 00:04:29.803 ************************************ 00:04:29.803 06:34:43 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:29.803 06:34:43 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:29.803 06:34:43 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:29.803 06:34:43 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.804 06:34:43 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:29.804 06:34:43 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:29.804 06:34:43 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:29.804 06:34:43 -- setup/common.sh@40 -- # local part_no=1 00:04:29.804 06:34:43 -- setup/common.sh@41 -- # local size=1073741824 00:04:29.804 06:34:43 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:29.804 06:34:43 -- setup/common.sh@44 -- # parts=() 00:04:29.804 06:34:43 -- setup/common.sh@44 -- # local parts 00:04:29.804 06:34:43 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:29.804 06:34:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.804 06:34:43 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:29.804 06:34:43 -- setup/common.sh@46 -- # (( part++ )) 00:04:29.804 06:34:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.804 06:34:43 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:29.804 06:34:43 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:29.804 06:34:43 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:30.740 Creating new GPT entries in memory. 00:04:30.740 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:30.740 other utilities. 00:04:30.740 06:34:44 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:30.740 06:34:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.740 06:34:44 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:30.740 06:34:44 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:30.740 06:34:44 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:31.675 Creating new GPT entries in memory. 00:04:31.675 The operation has completed successfully. 
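Condensed from the trace above and the mkfs/mount recorded just below, the nvme_mount setup amounts to wiping the disk's partition table, creating one small test partition, formatting it and mounting it; the flock and udev-sync wrappers from the log are omitted in this sketch:

disk=/dev/nvme0n1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                # drop any existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:264191      # one small test partition (p1)
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"               # quiet + force: this is a scratch disk
mount "${disk}p1" "$mnt"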
00:04:31.675 06:34:45 -- setup/common.sh@57 -- # (( part++ )) 00:04:31.675 06:34:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.675 06:34:45 -- setup/common.sh@62 -- # wait 53806 00:04:31.675 06:34:45 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.675 06:34:45 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:31.675 06:34:45 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.675 06:34:45 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:31.675 06:34:45 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:31.675 06:34:45 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.933 06:34:45 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:31.933 06:34:45 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:31.933 06:34:45 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:31.933 06:34:45 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.933 06:34:45 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:31.933 06:34:45 -- setup/devices.sh@53 -- # local found=0 00:04:31.933 06:34:45 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.933 06:34:45 -- setup/devices.sh@56 -- # : 00:04:31.933 06:34:45 -- setup/devices.sh@59 -- # local pci status 00:04:31.933 06:34:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.933 06:34:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:31.933 06:34:45 -- setup/devices.sh@47 -- # setup output config 00:04:31.933 06:34:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.933 06:34:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:31.933 06:34:45 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:31.933 06:34:45 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:31.933 06:34:45 -- setup/devices.sh@63 -- # found=1 00:04:31.933 06:34:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.933 06:34:45 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:31.933 06:34:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.192 06:34:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:32.192 06:34:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.451 06:34:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:32.451 06:34:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.451 06:34:46 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.451 06:34:46 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:32.451 06:34:46 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.451 06:34:46 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.451 06:34:46 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:32.451 06:34:46 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:32.451 06:34:46 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.451 06:34:46 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.451 06:34:46 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:32.451 06:34:46 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:32.451 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:32.451 06:34:46 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:32.451 06:34:46 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:32.710 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:32.710 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:32.710 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:32.710 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:32.710 06:34:46 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:32.710 06:34:46 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:32.710 06:34:46 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.710 06:34:46 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:32.710 06:34:46 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:32.710 06:34:46 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.710 06:34:46 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:32.710 06:34:46 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:32.710 06:34:46 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:32.710 06:34:46 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.710 06:34:46 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:32.710 06:34:46 -- setup/devices.sh@53 -- # local found=0 00:04:32.710 06:34:46 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.710 06:34:46 -- setup/devices.sh@56 -- # : 00:04:32.710 06:34:46 -- setup/devices.sh@59 -- # local pci status 00:04:32.710 06:34:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.710 06:34:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:32.710 06:34:46 -- setup/devices.sh@47 -- # setup output config 00:04:32.710 06:34:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.710 06:34:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:32.969 06:34:46 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:32.969 06:34:46 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:32.969 06:34:46 -- setup/devices.sh@63 -- # found=1 00:04:32.969 06:34:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.969 06:34:46 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:32.969 
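The cleanup and re-format step recorded above boils down to the following sequence, shown here as plain commands for readability (device names, paths and the 1024M size are copied from the log): unmount if still mounted, wipe the partition and then the whole disk, and rebuild the filesystem directly on the bare device for the second mount variant.

mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

mountpoint -q "$mnt" && umount "$mnt"   # unmount only if still mounted
wipefs --all /dev/nvme0n1p1             # erase the ext4 signature on p1
wipefs --all /dev/nvme0n1               # erase the GPT/PMBR on the disk itself
mkfs.ext4 -qF /dev/nvme0n1 1024M        # second variant: fs on the bare disk
mount /dev/nvme0n1 "$mnt"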
06:34:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.228 06:34:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:33.228 06:34:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.487 06:34:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:33.487 06:34:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.487 06:34:47 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.487 06:34:47 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:33.487 06:34:47 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.487 06:34:47 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:33.487 06:34:47 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:33.487 06:34:47 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.487 06:34:47 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:33.487 06:34:47 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:33.487 06:34:47 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:33.487 06:34:47 -- setup/devices.sh@50 -- # local mount_point= 00:04:33.487 06:34:47 -- setup/devices.sh@51 -- # local test_file= 00:04:33.487 06:34:47 -- setup/devices.sh@53 -- # local found=0 00:04:33.487 06:34:47 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:33.487 06:34:47 -- setup/devices.sh@59 -- # local pci status 00:04:33.487 06:34:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.487 06:34:47 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:33.487 06:34:47 -- setup/devices.sh@47 -- # setup output config 00:04:33.487 06:34:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.487 06:34:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:33.746 06:34:47 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:33.746 06:34:47 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:33.746 06:34:47 -- setup/devices.sh@63 -- # found=1 00:04:33.746 06:34:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.746 06:34:47 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:33.746 06:34:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.005 06:34:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:34.005 06:34:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.265 06:34:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:34.265 06:34:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.265 06:34:48 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.265 06:34:48 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:34.265 06:34:48 -- setup/devices.sh@68 -- # return 0 00:04:34.265 06:34:48 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:34.265 06:34:48 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:34.265 06:34:48 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:34.265 06:34:48 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:34.265 06:34:48 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:34.265 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:04:34.265 00:04:34.265 real 0m4.558s 00:04:34.265 user 0m1.003s 00:04:34.265 sys 0m1.228s 00:04:34.265 06:34:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:34.265 06:34:48 -- common/autotest_common.sh@10 -- # set +x 00:04:34.265 ************************************ 00:04:34.265 END TEST nvme_mount 00:04:34.265 ************************************ 00:04:34.265 06:34:48 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:34.265 06:34:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:34.265 06:34:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:34.265 06:34:48 -- common/autotest_common.sh@10 -- # set +x 00:04:34.265 ************************************ 00:04:34.265 START TEST dm_mount 00:04:34.265 ************************************ 00:04:34.265 06:34:48 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:34.265 06:34:48 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:34.265 06:34:48 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:34.265 06:34:48 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:34.265 06:34:48 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:34.265 06:34:48 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:34.265 06:34:48 -- setup/common.sh@40 -- # local part_no=2 00:04:34.265 06:34:48 -- setup/common.sh@41 -- # local size=1073741824 00:04:34.265 06:34:48 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:34.265 06:34:48 -- setup/common.sh@44 -- # parts=() 00:04:34.265 06:34:48 -- setup/common.sh@44 -- # local parts 00:04:34.265 06:34:48 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:34.265 06:34:48 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:34.265 06:34:48 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:34.265 06:34:48 -- setup/common.sh@46 -- # (( part++ )) 00:04:34.265 06:34:48 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:34.265 06:34:48 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:34.265 06:34:48 -- setup/common.sh@46 -- # (( part++ )) 00:04:34.265 06:34:48 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:34.265 06:34:48 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:34.265 06:34:48 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:34.265 06:34:48 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:35.202 Creating new GPT entries in memory. 00:04:35.202 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:35.202 other utilities. 00:04:35.202 06:34:49 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:35.202 06:34:49 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.202 06:34:49 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:35.202 06:34:49 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:35.202 06:34:49 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:36.579 Creating new GPT entries in memory. 00:04:36.579 The operation has completed successfully. 00:04:36.579 06:34:50 -- setup/common.sh@57 -- # (( part++ )) 00:04:36.579 06:34:50 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:36.579 06:34:50 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
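dm_mount repeats the partitioning with two partitions and, as the trace that follows shows, stacks a device-mapper target on top of them (both nvme0n1p1 and nvme0n1p2 end up as holders of dm-0) before formatting and mounting it. A hedged sketch of that flow; the linear table below is an assumption for illustration, since the log only records "dmsetup create nvme_dm_test":

disk=/dev/nvme0n1

sgdisk "$disk" --zap-all
sgdisk "$disk" --new=1:2048:264191      # p1: 262144 sectors
sgdisk "$disk" --new=2:264192:526335    # p2: 262144 sectors
# Join both partitions into one device-mapper device (table layout assumed).
dmsetup create nvme_dm_test <<'EOF'
0 262144 linear /dev/nvme0n1p1 0
262144 262144 linear /dev/nvme0n1p2 0
EOF
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount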
2048 : part_end + 1 )) 00:04:36.579 06:34:50 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:36.579 06:34:50 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:37.515 The operation has completed successfully. 00:04:37.515 06:34:51 -- setup/common.sh@57 -- # (( part++ )) 00:04:37.515 06:34:51 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:37.515 06:34:51 -- setup/common.sh@62 -- # wait 54266 00:04:37.515 06:34:51 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:37.515 06:34:51 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:37.515 06:34:51 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:37.515 06:34:51 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:37.515 06:34:51 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:37.515 06:34:51 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:37.515 06:34:51 -- setup/devices.sh@161 -- # break 00:04:37.515 06:34:51 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:37.515 06:34:51 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:37.515 06:34:51 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:37.515 06:34:51 -- setup/devices.sh@166 -- # dm=dm-0 00:04:37.515 06:34:51 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:37.515 06:34:51 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:37.515 06:34:51 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:37.515 06:34:51 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:37.515 06:34:51 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:37.515 06:34:51 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:37.515 06:34:51 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:37.515 06:34:51 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:37.515 06:34:51 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:37.515 06:34:51 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:37.515 06:34:51 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:37.515 06:34:51 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:37.515 06:34:51 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:37.515 06:34:51 -- setup/devices.sh@53 -- # local found=0 00:04:37.515 06:34:51 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:37.515 06:34:51 -- setup/devices.sh@56 -- # : 00:04:37.515 06:34:51 -- setup/devices.sh@59 -- # local pci status 00:04:37.515 06:34:51 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:37.516 06:34:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.516 06:34:51 -- setup/devices.sh@47 -- # setup output config 00:04:37.516 06:34:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.516 06:34:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:37.774 06:34:51 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:37.774 06:34:51 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:37.774 06:34:51 -- setup/devices.sh@63 -- # found=1 00:04:37.774 06:34:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.774 06:34:51 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:37.774 06:34:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.032 06:34:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:38.032 06:34:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.032 06:34:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:38.032 06:34:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.032 06:34:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.032 06:34:51 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:38.032 06:34:52 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:38.032 06:34:52 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:38.032 06:34:52 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:38.032 06:34:52 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:38.032 06:34:52 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:38.032 06:34:52 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:38.032 06:34:52 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:38.032 06:34:52 -- setup/devices.sh@50 -- # local mount_point= 00:04:38.032 06:34:52 -- setup/devices.sh@51 -- # local test_file= 00:04:38.032 06:34:52 -- setup/devices.sh@53 -- # local found=0 00:04:38.032 06:34:52 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:38.032 06:34:52 -- setup/devices.sh@59 -- # local pci status 00:04:38.032 06:34:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.032 06:34:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:38.032 06:34:52 -- setup/devices.sh@47 -- # setup output config 00:04:38.290 06:34:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.290 06:34:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:38.290 06:34:52 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:38.290 06:34:52 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:38.290 06:34:52 -- setup/devices.sh@63 -- # found=1 00:04:38.290 06:34:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.290 06:34:52 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:38.290 06:34:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.549 06:34:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:38.549 06:34:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.809 06:34:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:38.809 06:34:52 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.809 06:34:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.809 06:34:52 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:38.809 06:34:52 -- setup/devices.sh@68 -- # return 0 00:04:38.809 06:34:52 -- setup/devices.sh@187 -- # cleanup_dm 00:04:38.809 06:34:52 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:38.809 06:34:52 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:38.809 06:34:52 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:38.809 06:34:52 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.809 06:34:52 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:38.809 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:38.809 06:34:52 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:38.809 06:34:52 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:38.809 00:04:38.809 real 0m4.584s 00:04:38.809 user 0m0.689s 00:04:38.809 sys 0m0.830s 00:04:38.809 06:34:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:38.809 06:34:52 -- common/autotest_common.sh@10 -- # set +x 00:04:38.809 ************************************ 00:04:38.809 END TEST dm_mount 00:04:38.809 ************************************ 00:04:38.809 06:34:52 -- setup/devices.sh@1 -- # cleanup 00:04:38.809 06:34:52 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:38.809 06:34:52 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:38.809 06:34:52 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.809 06:34:52 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:38.809 06:34:52 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:38.809 06:34:52 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:39.083 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:39.083 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:39.083 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:39.083 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:39.083 06:34:53 -- setup/devices.sh@12 -- # cleanup_dm 00:04:39.083 06:34:53 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:39.357 06:34:53 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:39.357 06:34:53 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.357 06:34:53 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:39.357 06:34:53 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:39.357 06:34:53 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:39.357 ************************************ 00:04:39.357 END TEST devices 00:04:39.357 ************************************ 00:04:39.357 00:04:39.357 real 0m10.784s 00:04:39.357 user 0m2.382s 00:04:39.357 sys 0m2.715s 00:04:39.357 06:34:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:39.357 06:34:53 -- common/autotest_common.sh@10 -- # set +x 00:04:39.357 00:04:39.357 real 0m22.760s 00:04:39.357 user 0m7.732s 00:04:39.357 sys 0m9.449s 00:04:39.357 06:34:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:39.357 06:34:53 -- common/autotest_common.sh@10 -- # set +x 00:04:39.357 ************************************ 00:04:39.357 END TEST setup.sh 00:04:39.357 ************************************ 00:04:39.357 06:34:53 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:39.357 Hugepages 00:04:39.357 node hugesize free / total 00:04:39.357 node0 1048576kB 0 / 0 00:04:39.357 node0 2048kB 2048 / 2048 00:04:39.357 00:04:39.357 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:39.616 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:39.616 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:39.616 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:39.616 06:34:53 -- spdk/autotest.sh@128 -- # uname -s 00:04:39.616 06:34:53 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:39.616 06:34:53 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:39.616 06:34:53 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:40.184 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.443 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.443 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.443 06:34:54 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:41.819 06:34:55 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:41.819 06:34:55 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:41.819 06:34:55 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:41.819 06:34:55 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:41.819 06:34:55 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:41.819 06:34:55 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:41.819 06:34:55 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:41.819 06:34:55 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:41.819 06:34:55 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:41.819 06:34:55 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:04:41.819 06:34:55 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:41.820 06:34:55 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:41.820 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:42.079 Waiting for block devices as requested 00:04:42.079 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:42.079 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:42.079 06:34:56 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:42.079 06:34:56 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:42.079 06:34:56 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:42.079 06:34:56 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:04:42.079 06:34:56 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:42.079 06:34:56 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:04:42.079 06:34:56 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:42.079 06:34:56 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:42.079 06:34:56 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:42.079 06:34:56 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:42.079 06:34:56 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:42.079 06:34:56 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:42.079 06:34:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:42.079 06:34:56 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:42.079 06:34:56 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:42.079 06:34:56 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:42.079 06:34:56 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:42.079 06:34:56 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:42.079 06:34:56 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:42.338 06:34:56 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:42.338 06:34:56 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:42.338 06:34:56 -- common/autotest_common.sh@1552 -- # continue 00:04:42.338 06:34:56 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:42.338 06:34:56 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:42.338 06:34:56 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:42.338 06:34:56 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:04:42.338 06:34:56 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:42.338 06:34:56 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:04:42.338 06:34:56 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:42.338 06:34:56 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:04:42.338 06:34:56 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:04:42.338 06:34:56 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:04:42.338 06:34:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:42.338 06:34:56 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:42.338 06:34:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:42.338 06:34:56 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:42.338 06:34:56 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:42.338 06:34:56 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:42.338 06:34:56 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:04:42.338 06:34:56 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:42.338 06:34:56 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:42.338 06:34:56 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:42.338 06:34:56 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:42.338 06:34:56 -- common/autotest_common.sh@1552 -- # continue 00:04:42.338 06:34:56 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:42.338 06:34:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:42.338 06:34:56 -- common/autotest_common.sh@10 -- # set +x 00:04:42.338 06:34:56 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:42.338 06:34:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:42.338 06:34:56 -- common/autotest_common.sh@10 -- # set +x 00:04:42.338 06:34:56 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:42.906 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:43.165 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.165 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:04:43.165 06:34:57 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:43.165 06:34:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:43.165 06:34:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.165 06:34:57 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:43.165 06:34:57 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:43.165 06:34:57 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:43.165 06:34:57 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:43.165 06:34:57 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:43.165 06:34:57 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:04:43.165 06:34:57 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:43.165 06:34:57 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:43.165 06:34:57 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:43.165 06:34:57 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:43.165 06:34:57 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:43.424 06:34:57 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:04:43.424 06:34:57 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:43.424 06:34:57 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:43.424 06:34:57 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:43.424 06:34:57 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:43.424 06:34:57 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:43.424 06:34:57 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:43.424 06:34:57 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:43.424 06:34:57 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:43.424 06:34:57 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:43.424 06:34:57 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:04:43.424 06:34:57 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:04:43.424 06:34:57 -- common/autotest_common.sh@1588 -- # return 0 00:04:43.424 06:34:57 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:04:43.424 06:34:57 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:04:43.424 06:34:57 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:43.424 06:34:57 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:43.424 06:34:57 -- spdk/autotest.sh@160 -- # timing_enter lib 00:04:43.424 06:34:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:43.424 06:34:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.424 06:34:57 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:43.424 06:34:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.424 06:34:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.424 06:34:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.424 ************************************ 00:04:43.424 START TEST env 00:04:43.424 ************************************ 00:04:43.424 06:34:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:43.424 * Looking for test storage... 
00:04:43.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:43.424 06:34:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:43.424 06:34:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:43.424 06:34:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:43.424 06:34:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:43.424 06:34:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:43.424 06:34:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:43.424 06:34:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:43.424 06:34:57 -- scripts/common.sh@335 -- # IFS=.-: 00:04:43.424 06:34:57 -- scripts/common.sh@335 -- # read -ra ver1 00:04:43.424 06:34:57 -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.424 06:34:57 -- scripts/common.sh@336 -- # read -ra ver2 00:04:43.424 06:34:57 -- scripts/common.sh@337 -- # local 'op=<' 00:04:43.424 06:34:57 -- scripts/common.sh@339 -- # ver1_l=2 00:04:43.424 06:34:57 -- scripts/common.sh@340 -- # ver2_l=1 00:04:43.424 06:34:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:43.684 06:34:57 -- scripts/common.sh@343 -- # case "$op" in 00:04:43.684 06:34:57 -- scripts/common.sh@344 -- # : 1 00:04:43.684 06:34:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:43.684 06:34:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.684 06:34:57 -- scripts/common.sh@364 -- # decimal 1 00:04:43.684 06:34:57 -- scripts/common.sh@352 -- # local d=1 00:04:43.684 06:34:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.684 06:34:57 -- scripts/common.sh@354 -- # echo 1 00:04:43.684 06:34:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:43.684 06:34:57 -- scripts/common.sh@365 -- # decimal 2 00:04:43.684 06:34:57 -- scripts/common.sh@352 -- # local d=2 00:04:43.684 06:34:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.684 06:34:57 -- scripts/common.sh@354 -- # echo 2 00:04:43.684 06:34:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:43.684 06:34:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:43.684 06:34:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:43.684 06:34:57 -- scripts/common.sh@367 -- # return 0 00:04:43.684 06:34:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.684 06:34:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:43.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.684 --rc genhtml_branch_coverage=1 00:04:43.684 --rc genhtml_function_coverage=1 00:04:43.684 --rc genhtml_legend=1 00:04:43.684 --rc geninfo_all_blocks=1 00:04:43.684 --rc geninfo_unexecuted_blocks=1 00:04:43.684 00:04:43.684 ' 00:04:43.684 06:34:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:43.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.684 --rc genhtml_branch_coverage=1 00:04:43.684 --rc genhtml_function_coverage=1 00:04:43.684 --rc genhtml_legend=1 00:04:43.684 --rc geninfo_all_blocks=1 00:04:43.684 --rc geninfo_unexecuted_blocks=1 00:04:43.684 00:04:43.684 ' 00:04:43.684 06:34:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:43.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.684 --rc genhtml_branch_coverage=1 00:04:43.684 --rc genhtml_function_coverage=1 00:04:43.684 --rc genhtml_legend=1 00:04:43.684 --rc geninfo_all_blocks=1 00:04:43.684 --rc geninfo_unexecuted_blocks=1 00:04:43.684 00:04:43.684 ' 00:04:43.684 06:34:57 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:43.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.684 --rc genhtml_branch_coverage=1 00:04:43.684 --rc genhtml_function_coverage=1 00:04:43.684 --rc genhtml_legend=1 00:04:43.684 --rc geninfo_all_blocks=1 00:04:43.684 --rc geninfo_unexecuted_blocks=1 00:04:43.684 00:04:43.684 ' 00:04:43.684 06:34:57 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:43.684 06:34:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.684 06:34:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.684 06:34:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.684 ************************************ 00:04:43.684 START TEST env_memory 00:04:43.684 ************************************ 00:04:43.684 06:34:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:43.684 00:04:43.684 00:04:43.684 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.684 http://cunit.sourceforge.net/ 00:04:43.684 00:04:43.684 00:04:43.684 Suite: memory 00:04:43.684 Test: alloc and free memory map ...[2024-12-14 06:34:57.489742] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:43.684 passed 00:04:43.684 Test: mem map translation ...[2024-12-14 06:34:57.521326] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:43.684 [2024-12-14 06:34:57.521532] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:43.684 [2024-12-14 06:34:57.521776] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:43.684 [2024-12-14 06:34:57.522059] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:43.684 passed 00:04:43.684 Test: mem map registration ...[2024-12-14 06:34:57.586116] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:43.684 [2024-12-14 06:34:57.586302] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:43.684 passed 00:04:43.684 Test: mem map adjacent registrations ...passed 00:04:43.684 00:04:43.684 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.684 suites 1 1 n/a 0 0 00:04:43.684 tests 4 4 4 0 0 00:04:43.684 asserts 152 152 152 0 n/a 00:04:43.684 00:04:43.684 Elapsed time = 0.213 seconds 00:04:43.684 00:04:43.944 real 0m0.237s 00:04:43.944 user 0m0.213s 00:04:43.944 sys 0m0.018s 00:04:43.944 06:34:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:43.944 06:34:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.944 ************************************ 00:04:43.944 END TEST env_memory 00:04:43.944 ************************************ 00:04:43.944 06:34:57 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:43.944 06:34:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.944 06:34:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.944 06:34:57 -- 
common/autotest_common.sh@10 -- # set +x 00:04:43.944 ************************************ 00:04:43.944 START TEST env_vtophys 00:04:43.944 ************************************ 00:04:43.944 06:34:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:43.944 EAL: lib.eal log level changed from notice to debug 00:04:43.944 EAL: Detected lcore 0 as core 0 on socket 0 00:04:43.944 EAL: Detected lcore 1 as core 0 on socket 0 00:04:43.944 EAL: Detected lcore 2 as core 0 on socket 0 00:04:43.944 EAL: Detected lcore 3 as core 0 on socket 0 00:04:43.944 EAL: Detected lcore 4 as core 0 on socket 0 00:04:43.944 EAL: Detected lcore 5 as core 0 on socket 0 00:04:43.944 EAL: Detected lcore 6 as core 0 on socket 0 00:04:43.944 EAL: Detected lcore 7 as core 0 on socket 0 00:04:43.944 EAL: Detected lcore 8 as core 0 on socket 0 00:04:43.944 EAL: Detected lcore 9 as core 0 on socket 0 00:04:43.944 EAL: Maximum logical cores by configuration: 128 00:04:43.944 EAL: Detected CPU lcores: 10 00:04:43.944 EAL: Detected NUMA nodes: 1 00:04:43.944 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:43.944 EAL: Detected shared linkage of DPDK 00:04:43.944 EAL: No shared files mode enabled, IPC will be disabled 00:04:43.944 EAL: Selected IOVA mode 'PA' 00:04:43.944 EAL: Probing VFIO support... 00:04:43.944 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:43.944 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:43.944 EAL: Ask a virtual area of 0x2e000 bytes 00:04:43.944 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:43.944 EAL: Setting up physically contiguous memory... 00:04:43.944 EAL: Setting maximum number of open files to 524288 00:04:43.944 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:43.944 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:43.944 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.944 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:43.944 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.944 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.944 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:43.944 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:43.944 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.944 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:43.944 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.944 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.944 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:43.944 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:43.944 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.944 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:43.944 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.944 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.944 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:43.944 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:43.944 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.944 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:43.944 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.944 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.944 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:43.944 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 
00:04:43.944 EAL: Hugepages will be freed exactly as allocated. 00:04:43.944 EAL: No shared files mode enabled, IPC is disabled 00:04:43.944 EAL: No shared files mode enabled, IPC is disabled 00:04:43.944 EAL: TSC frequency is ~2200000 KHz 00:04:43.944 EAL: Main lcore 0 is ready (tid=7f21cde81a00;cpuset=[0]) 00:04:43.944 EAL: Trying to obtain current memory policy. 00:04:43.944 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.944 EAL: Restoring previous memory policy: 0 00:04:43.944 EAL: request: mp_malloc_sync 00:04:43.944 EAL: No shared files mode enabled, IPC is disabled 00:04:43.944 EAL: Heap on socket 0 was expanded by 2MB 00:04:43.944 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:43.944 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:43.944 EAL: Mem event callback 'spdk:(nil)' registered 00:04:43.944 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:43.944 00:04:43.944 00:04:43.944 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.944 http://cunit.sourceforge.net/ 00:04:43.944 00:04:43.944 00:04:43.944 Suite: components_suite 00:04:43.944 Test: vtophys_malloc_test ...passed 00:04:43.944 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:43.944 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.944 EAL: Restoring previous memory policy: 4 00:04:43.944 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.944 EAL: request: mp_malloc_sync 00:04:43.944 EAL: No shared files mode enabled, IPC is disabled 00:04:43.944 EAL: Heap on socket 0 was expanded by 4MB 00:04:43.944 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.944 EAL: request: mp_malloc_sync 00:04:43.944 EAL: No shared files mode enabled, IPC is disabled 00:04:43.944 EAL: Heap on socket 0 was shrunk by 4MB 00:04:43.944 EAL: Trying to obtain current memory policy. 00:04:43.944 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.944 EAL: Restoring previous memory policy: 4 00:04:43.944 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.944 EAL: request: mp_malloc_sync 00:04:43.944 EAL: No shared files mode enabled, IPC is disabled 00:04:43.944 EAL: Heap on socket 0 was expanded by 6MB 00:04:43.944 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.944 EAL: request: mp_malloc_sync 00:04:43.944 EAL: No shared files mode enabled, IPC is disabled 00:04:43.944 EAL: Heap on socket 0 was shrunk by 6MB 00:04:43.944 EAL: Trying to obtain current memory policy. 00:04:43.944 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.944 EAL: Restoring previous memory policy: 4 00:04:43.944 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.944 EAL: request: mp_malloc_sync 00:04:43.944 EAL: No shared files mode enabled, IPC is disabled 00:04:43.944 EAL: Heap on socket 0 was expanded by 10MB 00:04:43.944 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.944 EAL: request: mp_malloc_sync 00:04:43.944 EAL: No shared files mode enabled, IPC is disabled 00:04:43.944 EAL: Heap on socket 0 was shrunk by 10MB 00:04:43.944 EAL: Trying to obtain current memory policy. 
00:04:43.944 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.945 EAL: Restoring previous memory policy: 4 00:04:43.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.945 EAL: request: mp_malloc_sync 00:04:43.945 EAL: No shared files mode enabled, IPC is disabled 00:04:43.945 EAL: Heap on socket 0 was expanded by 18MB 00:04:43.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.945 EAL: request: mp_malloc_sync 00:04:43.945 EAL: No shared files mode enabled, IPC is disabled 00:04:43.945 EAL: Heap on socket 0 was shrunk by 18MB 00:04:43.945 EAL: Trying to obtain current memory policy. 00:04:43.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.945 EAL: Restoring previous memory policy: 4 00:04:43.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.945 EAL: request: mp_malloc_sync 00:04:43.945 EAL: No shared files mode enabled, IPC is disabled 00:04:43.945 EAL: Heap on socket 0 was expanded by 34MB 00:04:43.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.945 EAL: request: mp_malloc_sync 00:04:43.945 EAL: No shared files mode enabled, IPC is disabled 00:04:43.945 EAL: Heap on socket 0 was shrunk by 34MB 00:04:43.945 EAL: Trying to obtain current memory policy. 00:04:43.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.204 EAL: Restoring previous memory policy: 4 00:04:44.204 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.204 EAL: request: mp_malloc_sync 00:04:44.204 EAL: No shared files mode enabled, IPC is disabled 00:04:44.204 EAL: Heap on socket 0 was expanded by 66MB 00:04:44.204 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.204 EAL: request: mp_malloc_sync 00:04:44.204 EAL: No shared files mode enabled, IPC is disabled 00:04:44.204 EAL: Heap on socket 0 was shrunk by 66MB 00:04:44.204 EAL: Trying to obtain current memory policy. 00:04:44.204 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.204 EAL: Restoring previous memory policy: 4 00:04:44.204 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.204 EAL: request: mp_malloc_sync 00:04:44.204 EAL: No shared files mode enabled, IPC is disabled 00:04:44.204 EAL: Heap on socket 0 was expanded by 130MB 00:04:44.204 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.204 EAL: request: mp_malloc_sync 00:04:44.204 EAL: No shared files mode enabled, IPC is disabled 00:04:44.204 EAL: Heap on socket 0 was shrunk by 130MB 00:04:44.204 EAL: Trying to obtain current memory policy. 00:04:44.204 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.204 EAL: Restoring previous memory policy: 4 00:04:44.204 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.204 EAL: request: mp_malloc_sync 00:04:44.204 EAL: No shared files mode enabled, IPC is disabled 00:04:44.204 EAL: Heap on socket 0 was expanded by 258MB 00:04:44.463 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.463 EAL: request: mp_malloc_sync 00:04:44.463 EAL: No shared files mode enabled, IPC is disabled 00:04:44.463 EAL: Heap on socket 0 was shrunk by 258MB 00:04:44.463 EAL: Trying to obtain current memory policy. 
00:04:44.463 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.722 EAL: Restoring previous memory policy: 4 00:04:44.722 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.722 EAL: request: mp_malloc_sync 00:04:44.722 EAL: No shared files mode enabled, IPC is disabled 00:04:44.722 EAL: Heap on socket 0 was expanded by 514MB 00:04:44.722 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.981 EAL: request: mp_malloc_sync 00:04:44.981 EAL: No shared files mode enabled, IPC is disabled 00:04:44.981 EAL: Heap on socket 0 was shrunk by 514MB 00:04:44.981 EAL: Trying to obtain current memory policy. 00:04:44.981 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.240 EAL: Restoring previous memory policy: 4 00:04:45.240 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.240 EAL: request: mp_malloc_sync 00:04:45.240 EAL: No shared files mode enabled, IPC is disabled 00:04:45.240 EAL: Heap on socket 0 was expanded by 1026MB 00:04:45.499 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.789 passed 00:04:45.789 00:04:45.789 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.789 suites 1 1 n/a 0 0 00:04:45.789 tests 2 2 2 0 0 00:04:45.789 asserts 5274 5274 5274 0 n/a 00:04:45.789 00:04:45.789 Elapsed time = 1.796 seconds 00:04:45.789 EAL: request: mp_malloc_sync 00:04:45.789 EAL: No shared files mode enabled, IPC is disabled 00:04:45.789 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:45.789 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.789 EAL: request: mp_malloc_sync 00:04:45.789 EAL: No shared files mode enabled, IPC is disabled 00:04:45.789 EAL: Heap on socket 0 was shrunk by 2MB 00:04:45.789 EAL: No shared files mode enabled, IPC is disabled 00:04:45.789 EAL: No shared files mode enabled, IPC is disabled 00:04:45.789 EAL: No shared files mode enabled, IPC is disabled 00:04:45.789 ************************************ 00:04:45.789 END TEST env_vtophys 00:04:45.789 ************************************ 00:04:45.789 00:04:45.789 real 0m2.001s 00:04:45.789 user 0m1.134s 00:04:45.789 sys 0m0.729s 00:04:45.789 06:34:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:45.789 06:34:59 -- common/autotest_common.sh@10 -- # set +x 00:04:46.068 06:34:59 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:46.068 06:34:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.068 06:34:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.068 06:34:59 -- common/autotest_common.sh@10 -- # set +x 00:04:46.068 ************************************ 00:04:46.068 START TEST env_pci 00:04:46.068 ************************************ 00:04:46.068 06:34:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:46.068 00:04:46.068 00:04:46.068 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.068 http://cunit.sourceforge.net/ 00:04:46.068 00:04:46.068 00:04:46.068 Suite: pci 00:04:46.068 Test: pci_hook ...[2024-12-14 06:34:59.804150] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 55410 has claimed it 00:04:46.068 passed 00:04:46.068 00:04:46.068 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.068 suites 1 1 n/a 0 0 00:04:46.068 tests 1 1 1 0 0 00:04:46.068 asserts 25 25 25 0 n/a 00:04:46.069 00:04:46.069 Elapsed time = 0.002 seconds 00:04:46.069 EAL: Cannot find device (10000:00:01.0) 00:04:46.069 EAL: Failed to attach device 
on primary process 00:04:46.069 ************************************ 00:04:46.069 END TEST env_pci 00:04:46.069 ************************************ 00:04:46.069 00:04:46.069 real 0m0.020s 00:04:46.069 user 0m0.009s 00:04:46.069 sys 0m0.011s 00:04:46.069 06:34:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.069 06:34:59 -- common/autotest_common.sh@10 -- # set +x 00:04:46.069 06:34:59 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:46.069 06:34:59 -- env/env.sh@15 -- # uname 00:04:46.069 06:34:59 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:46.069 06:34:59 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:46.069 06:34:59 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:46.069 06:34:59 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:46.069 06:34:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.069 06:34:59 -- common/autotest_common.sh@10 -- # set +x 00:04:46.069 ************************************ 00:04:46.069 START TEST env_dpdk_post_init 00:04:46.069 ************************************ 00:04:46.069 06:34:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:46.069 EAL: Detected CPU lcores: 10 00:04:46.069 EAL: Detected NUMA nodes: 1 00:04:46.069 EAL: Detected shared linkage of DPDK 00:04:46.069 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:46.069 EAL: Selected IOVA mode 'PA' 00:04:46.069 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:46.069 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:46.069 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:46.069 Starting DPDK initialization... 00:04:46.069 Starting SPDK post initialization... 00:04:46.069 SPDK NVMe probe 00:04:46.069 Attaching to 0000:00:06.0 00:04:46.069 Attaching to 0000:00:07.0 00:04:46.069 Attached to 0000:00:06.0 00:04:46.069 Attached to 0000:00:07.0 00:04:46.069 Cleaning up... 
00:04:46.069 00:04:46.069 real 0m0.177s 00:04:46.069 user 0m0.041s 00:04:46.069 sys 0m0.037s 00:04:46.069 ************************************ 00:04:46.069 END TEST env_dpdk_post_init 00:04:46.069 ************************************ 00:04:46.069 06:35:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.069 06:35:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.328 06:35:00 -- env/env.sh@26 -- # uname 00:04:46.328 06:35:00 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:46.328 06:35:00 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:46.328 06:35:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.328 06:35:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.328 06:35:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.328 ************************************ 00:04:46.328 START TEST env_mem_callbacks 00:04:46.328 ************************************ 00:04:46.328 06:35:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:46.328 EAL: Detected CPU lcores: 10 00:04:46.328 EAL: Detected NUMA nodes: 1 00:04:46.328 EAL: Detected shared linkage of DPDK 00:04:46.328 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:46.328 EAL: Selected IOVA mode 'PA' 00:04:46.328 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:46.328 00:04:46.328 00:04:46.328 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.328 http://cunit.sourceforge.net/ 00:04:46.328 00:04:46.328 00:04:46.328 Suite: memory 00:04:46.328 Test: test ... 00:04:46.328 register 0x200000200000 2097152 00:04:46.328 malloc 3145728 00:04:46.328 register 0x200000400000 4194304 00:04:46.328 buf 0x200000500000 len 3145728 PASSED 00:04:46.328 malloc 64 00:04:46.328 buf 0x2000004fff40 len 64 PASSED 00:04:46.328 malloc 4194304 00:04:46.328 register 0x200000800000 6291456 00:04:46.328 buf 0x200000a00000 len 4194304 PASSED 00:04:46.328 free 0x200000500000 3145728 00:04:46.328 free 0x2000004fff40 64 00:04:46.328 unregister 0x200000400000 4194304 PASSED 00:04:46.328 free 0x200000a00000 4194304 00:04:46.328 unregister 0x200000800000 6291456 PASSED 00:04:46.328 malloc 8388608 00:04:46.328 register 0x200000400000 10485760 00:04:46.328 buf 0x200000600000 len 8388608 PASSED 00:04:46.328 free 0x200000600000 8388608 00:04:46.328 unregister 0x200000400000 10485760 PASSED 00:04:46.328 passed 00:04:46.328 00:04:46.328 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.328 suites 1 1 n/a 0 0 00:04:46.328 tests 1 1 1 0 0 00:04:46.328 asserts 15 15 15 0 n/a 00:04:46.329 00:04:46.329 Elapsed time = 0.009 seconds 00:04:46.329 00:04:46.329 real 0m0.148s 00:04:46.329 user 0m0.016s 00:04:46.329 sys 0m0.029s 00:04:46.329 06:35:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.329 ************************************ 00:04:46.329 END TEST env_mem_callbacks 00:04:46.329 ************************************ 00:04:46.329 06:35:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.329 ************************************ 00:04:46.329 END TEST env 00:04:46.329 ************************************ 00:04:46.329 00:04:46.329 real 0m3.051s 00:04:46.329 user 0m1.593s 00:04:46.329 sys 0m1.085s 00:04:46.329 06:35:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.329 06:35:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.588 06:35:00 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
00:04:46.588 06:35:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.588 06:35:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.588 06:35:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.588 ************************************ 00:04:46.588 START TEST rpc 00:04:46.588 ************************************ 00:04:46.588 06:35:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:46.588 * Looking for test storage... 00:04:46.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:46.589 06:35:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:46.589 06:35:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:46.589 06:35:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:46.589 06:35:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:46.589 06:35:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:46.589 06:35:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:46.589 06:35:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:46.589 06:35:00 -- scripts/common.sh@335 -- # IFS=.-: 00:04:46.589 06:35:00 -- scripts/common.sh@335 -- # read -ra ver1 00:04:46.589 06:35:00 -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.589 06:35:00 -- scripts/common.sh@336 -- # read -ra ver2 00:04:46.589 06:35:00 -- scripts/common.sh@337 -- # local 'op=<' 00:04:46.589 06:35:00 -- scripts/common.sh@339 -- # ver1_l=2 00:04:46.589 06:35:00 -- scripts/common.sh@340 -- # ver2_l=1 00:04:46.589 06:35:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:46.589 06:35:00 -- scripts/common.sh@343 -- # case "$op" in 00:04:46.589 06:35:00 -- scripts/common.sh@344 -- # : 1 00:04:46.589 06:35:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:46.589 06:35:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:46.589 06:35:00 -- scripts/common.sh@364 -- # decimal 1 00:04:46.589 06:35:00 -- scripts/common.sh@352 -- # local d=1 00:04:46.589 06:35:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.589 06:35:00 -- scripts/common.sh@354 -- # echo 1 00:04:46.589 06:35:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:46.589 06:35:00 -- scripts/common.sh@365 -- # decimal 2 00:04:46.589 06:35:00 -- scripts/common.sh@352 -- # local d=2 00:04:46.589 06:35:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.589 06:35:00 -- scripts/common.sh@354 -- # echo 2 00:04:46.589 06:35:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:46.589 06:35:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:46.589 06:35:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:46.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:46.589 06:35:00 -- scripts/common.sh@367 -- # return 0 00:04:46.589 06:35:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.589 06:35:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:46.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.589 --rc genhtml_branch_coverage=1 00:04:46.589 --rc genhtml_function_coverage=1 00:04:46.589 --rc genhtml_legend=1 00:04:46.589 --rc geninfo_all_blocks=1 00:04:46.589 --rc geninfo_unexecuted_blocks=1 00:04:46.589 00:04:46.589 ' 00:04:46.589 06:35:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:46.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.589 --rc genhtml_branch_coverage=1 00:04:46.589 --rc genhtml_function_coverage=1 00:04:46.589 --rc genhtml_legend=1 00:04:46.589 --rc geninfo_all_blocks=1 00:04:46.589 --rc geninfo_unexecuted_blocks=1 00:04:46.589 00:04:46.589 ' 00:04:46.589 06:35:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:46.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.589 --rc genhtml_branch_coverage=1 00:04:46.589 --rc genhtml_function_coverage=1 00:04:46.589 --rc genhtml_legend=1 00:04:46.589 --rc geninfo_all_blocks=1 00:04:46.589 --rc geninfo_unexecuted_blocks=1 00:04:46.589 00:04:46.589 ' 00:04:46.589 06:35:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:46.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.589 --rc genhtml_branch_coverage=1 00:04:46.589 --rc genhtml_function_coverage=1 00:04:46.589 --rc genhtml_legend=1 00:04:46.589 --rc geninfo_all_blocks=1 00:04:46.589 --rc geninfo_unexecuted_blocks=1 00:04:46.589 00:04:46.589 ' 00:04:46.589 06:35:00 -- rpc/rpc.sh@65 -- # spdk_pid=55532 00:04:46.589 06:35:00 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.589 06:35:00 -- rpc/rpc.sh@67 -- # waitforlisten 55532 00:04:46.589 06:35:00 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:46.589 06:35:00 -- common/autotest_common.sh@829 -- # '[' -z 55532 ']' 00:04:46.589 06:35:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.589 06:35:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.589 06:35:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.589 06:35:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.589 06:35:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.850 [2024-12-14 06:35:00.590669] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:46.850 [2024-12-14 06:35:00.590979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55532 ] 00:04:46.850 [2024-12-14 06:35:00.723081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.850 [2024-12-14 06:35:00.817218] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:46.850 [2024-12-14 06:35:00.817792] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:46.850 [2024-12-14 06:35:00.817852] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 55532' to capture a snapshot of events at runtime. 
00:04:46.850 [2024-12-14 06:35:00.818021] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid55532 for offline analysis/debug. 00:04:46.850 [2024-12-14 06:35:00.818117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.786 06:35:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.786 06:35:01 -- common/autotest_common.sh@862 -- # return 0 00:04:47.786 06:35:01 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:47.786 06:35:01 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:47.786 06:35:01 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:47.786 06:35:01 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:47.786 06:35:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.786 06:35:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.786 06:35:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.786 ************************************ 00:04:47.786 START TEST rpc_integrity 00:04:47.786 ************************************ 00:04:47.786 06:35:01 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:47.786 06:35:01 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:47.786 06:35:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.786 06:35:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.786 06:35:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.786 06:35:01 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:47.786 06:35:01 -- rpc/rpc.sh@13 -- # jq length 00:04:47.786 06:35:01 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:47.786 06:35:01 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:47.786 06:35:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.786 06:35:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.786 06:35:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.786 06:35:01 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:47.786 06:35:01 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:47.786 06:35:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.786 06:35:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.786 06:35:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.786 06:35:01 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:47.786 { 00:04:47.786 "aliases": [ 00:04:47.786 "aa585370-836e-48fa-886d-a9f93c78d7c2" 00:04:47.786 ], 00:04:47.786 "assigned_rate_limits": { 00:04:47.786 "r_mbytes_per_sec": 0, 00:04:47.786 "rw_ios_per_sec": 0, 00:04:47.786 "rw_mbytes_per_sec": 0, 00:04:47.786 "w_mbytes_per_sec": 0 00:04:47.786 }, 00:04:47.786 "block_size": 512, 00:04:47.787 "claimed": false, 00:04:47.787 "driver_specific": {}, 00:04:47.787 "memory_domains": [ 00:04:47.787 { 00:04:47.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.787 "dma_device_type": 2 00:04:47.787 } 00:04:47.787 ], 00:04:47.787 "name": "Malloc0", 00:04:47.787 "num_blocks": 16384, 00:04:47.787 "product_name": "Malloc disk", 00:04:47.787 "supported_io_types": { 00:04:47.787 "abort": true, 00:04:47.787 "compare": false, 00:04:47.787 "compare_and_write": false, 00:04:47.787 "flush": true, 00:04:47.787 "nvme_admin": false, 00:04:47.787 "nvme_io": false, 00:04:47.787 
"read": true, 00:04:47.787 "reset": true, 00:04:47.787 "unmap": true, 00:04:47.787 "write": true, 00:04:47.787 "write_zeroes": true 00:04:47.787 }, 00:04:47.787 "uuid": "aa585370-836e-48fa-886d-a9f93c78d7c2", 00:04:47.787 "zoned": false 00:04:47.787 } 00:04:47.787 ]' 00:04:47.787 06:35:01 -- rpc/rpc.sh@17 -- # jq length 00:04:48.046 06:35:01 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:48.046 06:35:01 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:48.046 06:35:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.046 06:35:01 -- common/autotest_common.sh@10 -- # set +x 00:04:48.046 [2024-12-14 06:35:01.788520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:48.046 [2024-12-14 06:35:01.788600] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:48.046 [2024-12-14 06:35:01.788638] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20e2880 00:04:48.046 [2024-12-14 06:35:01.788647] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:48.046 [2024-12-14 06:35:01.790754] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:48.046 [2024-12-14 06:35:01.790996] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:48.046 Passthru0 00:04:48.046 06:35:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.046 06:35:01 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:48.046 06:35:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.046 06:35:01 -- common/autotest_common.sh@10 -- # set +x 00:04:48.046 06:35:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.046 06:35:01 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:48.046 { 00:04:48.046 "aliases": [ 00:04:48.046 "aa585370-836e-48fa-886d-a9f93c78d7c2" 00:04:48.046 ], 00:04:48.046 "assigned_rate_limits": { 00:04:48.046 "r_mbytes_per_sec": 0, 00:04:48.046 "rw_ios_per_sec": 0, 00:04:48.046 "rw_mbytes_per_sec": 0, 00:04:48.046 "w_mbytes_per_sec": 0 00:04:48.046 }, 00:04:48.046 "block_size": 512, 00:04:48.046 "claim_type": "exclusive_write", 00:04:48.046 "claimed": true, 00:04:48.046 "driver_specific": {}, 00:04:48.046 "memory_domains": [ 00:04:48.046 { 00:04:48.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.046 "dma_device_type": 2 00:04:48.046 } 00:04:48.046 ], 00:04:48.046 "name": "Malloc0", 00:04:48.046 "num_blocks": 16384, 00:04:48.046 "product_name": "Malloc disk", 00:04:48.046 "supported_io_types": { 00:04:48.046 "abort": true, 00:04:48.046 "compare": false, 00:04:48.046 "compare_and_write": false, 00:04:48.046 "flush": true, 00:04:48.046 "nvme_admin": false, 00:04:48.046 "nvme_io": false, 00:04:48.046 "read": true, 00:04:48.046 "reset": true, 00:04:48.046 "unmap": true, 00:04:48.046 "write": true, 00:04:48.046 "write_zeroes": true 00:04:48.046 }, 00:04:48.046 "uuid": "aa585370-836e-48fa-886d-a9f93c78d7c2", 00:04:48.046 "zoned": false 00:04:48.046 }, 00:04:48.046 { 00:04:48.046 "aliases": [ 00:04:48.046 "e3221d4e-12f3-569b-96ff-76dc310fb630" 00:04:48.046 ], 00:04:48.046 "assigned_rate_limits": { 00:04:48.046 "r_mbytes_per_sec": 0, 00:04:48.046 "rw_ios_per_sec": 0, 00:04:48.046 "rw_mbytes_per_sec": 0, 00:04:48.046 "w_mbytes_per_sec": 0 00:04:48.046 }, 00:04:48.046 "block_size": 512, 00:04:48.046 "claimed": false, 00:04:48.046 "driver_specific": { 00:04:48.046 "passthru": { 00:04:48.046 "base_bdev_name": "Malloc0", 00:04:48.046 "name": "Passthru0" 00:04:48.046 } 00:04:48.046 }, 00:04:48.046 
"memory_domains": [ 00:04:48.046 { 00:04:48.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.046 "dma_device_type": 2 00:04:48.047 } 00:04:48.047 ], 00:04:48.047 "name": "Passthru0", 00:04:48.047 "num_blocks": 16384, 00:04:48.047 "product_name": "passthru", 00:04:48.047 "supported_io_types": { 00:04:48.047 "abort": true, 00:04:48.047 "compare": false, 00:04:48.047 "compare_and_write": false, 00:04:48.047 "flush": true, 00:04:48.047 "nvme_admin": false, 00:04:48.047 "nvme_io": false, 00:04:48.047 "read": true, 00:04:48.047 "reset": true, 00:04:48.047 "unmap": true, 00:04:48.047 "write": true, 00:04:48.047 "write_zeroes": true 00:04:48.047 }, 00:04:48.047 "uuid": "e3221d4e-12f3-569b-96ff-76dc310fb630", 00:04:48.047 "zoned": false 00:04:48.047 } 00:04:48.047 ]' 00:04:48.047 06:35:01 -- rpc/rpc.sh@21 -- # jq length 00:04:48.047 06:35:01 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:48.047 06:35:01 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:48.047 06:35:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.047 06:35:01 -- common/autotest_common.sh@10 -- # set +x 00:04:48.047 06:35:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.047 06:35:01 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:48.047 06:35:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.047 06:35:01 -- common/autotest_common.sh@10 -- # set +x 00:04:48.047 06:35:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.047 06:35:01 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:48.047 06:35:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.047 06:35:01 -- common/autotest_common.sh@10 -- # set +x 00:04:48.047 06:35:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.047 06:35:01 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:48.047 06:35:01 -- rpc/rpc.sh@26 -- # jq length 00:04:48.047 06:35:01 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:48.047 00:04:48.047 real 0m0.328s 00:04:48.047 user 0m0.211s 00:04:48.047 sys 0m0.042s 00:04:48.047 06:35:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.047 06:35:01 -- common/autotest_common.sh@10 -- # set +x 00:04:48.047 ************************************ 00:04:48.047 END TEST rpc_integrity 00:04:48.047 ************************************ 00:04:48.047 06:35:02 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:48.047 06:35:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.047 06:35:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.047 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:04:48.047 ************************************ 00:04:48.047 START TEST rpc_plugins 00:04:48.047 ************************************ 00:04:48.047 06:35:02 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:04:48.047 06:35:02 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:48.047 06:35:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.047 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:04:48.047 06:35:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.047 06:35:02 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:48.047 06:35:02 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:48.047 06:35:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.047 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:04:48.306 06:35:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.306 06:35:02 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:48.306 { 00:04:48.306 "aliases": [ 00:04:48.306 
"1fe710e7-6ccf-4183-aa19-32769b42190c" 00:04:48.306 ], 00:04:48.306 "assigned_rate_limits": { 00:04:48.306 "r_mbytes_per_sec": 0, 00:04:48.306 "rw_ios_per_sec": 0, 00:04:48.306 "rw_mbytes_per_sec": 0, 00:04:48.306 "w_mbytes_per_sec": 0 00:04:48.306 }, 00:04:48.306 "block_size": 4096, 00:04:48.306 "claimed": false, 00:04:48.306 "driver_specific": {}, 00:04:48.306 "memory_domains": [ 00:04:48.306 { 00:04:48.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.306 "dma_device_type": 2 00:04:48.306 } 00:04:48.306 ], 00:04:48.306 "name": "Malloc1", 00:04:48.306 "num_blocks": 256, 00:04:48.306 "product_name": "Malloc disk", 00:04:48.306 "supported_io_types": { 00:04:48.306 "abort": true, 00:04:48.306 "compare": false, 00:04:48.306 "compare_and_write": false, 00:04:48.306 "flush": true, 00:04:48.306 "nvme_admin": false, 00:04:48.306 "nvme_io": false, 00:04:48.306 "read": true, 00:04:48.306 "reset": true, 00:04:48.306 "unmap": true, 00:04:48.306 "write": true, 00:04:48.306 "write_zeroes": true 00:04:48.306 }, 00:04:48.306 "uuid": "1fe710e7-6ccf-4183-aa19-32769b42190c", 00:04:48.306 "zoned": false 00:04:48.306 } 00:04:48.306 ]' 00:04:48.306 06:35:02 -- rpc/rpc.sh@32 -- # jq length 00:04:48.306 06:35:02 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:48.306 06:35:02 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:48.306 06:35:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.306 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:04:48.306 06:35:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.306 06:35:02 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:48.306 06:35:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.306 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:04:48.306 06:35:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.306 06:35:02 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:48.306 06:35:02 -- rpc/rpc.sh@36 -- # jq length 00:04:48.306 ************************************ 00:04:48.306 END TEST rpc_plugins 00:04:48.306 ************************************ 00:04:48.306 06:35:02 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:48.306 00:04:48.306 real 0m0.165s 00:04:48.306 user 0m0.108s 00:04:48.306 sys 0m0.019s 00:04:48.306 06:35:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.306 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:04:48.306 06:35:02 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:48.306 06:35:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.306 06:35:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.306 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:04:48.306 ************************************ 00:04:48.306 START TEST rpc_trace_cmd_test 00:04:48.306 ************************************ 00:04:48.307 06:35:02 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:04:48.307 06:35:02 -- rpc/rpc.sh@40 -- # local info 00:04:48.307 06:35:02 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:48.307 06:35:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.307 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:04:48.307 06:35:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.307 06:35:02 -- rpc/rpc.sh@42 -- # info='{ 00:04:48.307 "bdev": { 00:04:48.307 "mask": "0x8", 00:04:48.307 "tpoint_mask": "0xffffffffffffffff" 00:04:48.307 }, 00:04:48.307 "bdev_nvme": { 00:04:48.307 "mask": "0x4000", 00:04:48.307 "tpoint_mask": "0x0" 00:04:48.307 }, 00:04:48.307 "blobfs": { 
00:04:48.307 "mask": "0x80", 00:04:48.307 "tpoint_mask": "0x0" 00:04:48.307 }, 00:04:48.307 "dsa": { 00:04:48.307 "mask": "0x200", 00:04:48.307 "tpoint_mask": "0x0" 00:04:48.307 }, 00:04:48.307 "ftl": { 00:04:48.307 "mask": "0x40", 00:04:48.307 "tpoint_mask": "0x0" 00:04:48.307 }, 00:04:48.307 "iaa": { 00:04:48.307 "mask": "0x1000", 00:04:48.307 "tpoint_mask": "0x0" 00:04:48.307 }, 00:04:48.307 "iscsi_conn": { 00:04:48.307 "mask": "0x2", 00:04:48.307 "tpoint_mask": "0x0" 00:04:48.307 }, 00:04:48.307 "nvme_pcie": { 00:04:48.307 "mask": "0x800", 00:04:48.307 "tpoint_mask": "0x0" 00:04:48.307 }, 00:04:48.307 "nvme_tcp": { 00:04:48.307 "mask": "0x2000", 00:04:48.307 "tpoint_mask": "0x0" 00:04:48.307 }, 00:04:48.307 "nvmf_rdma": { 00:04:48.307 "mask": "0x10", 00:04:48.307 "tpoint_mask": "0x0" 00:04:48.307 }, 00:04:48.307 "nvmf_tcp": { 00:04:48.307 "mask": "0x20", 00:04:48.307 "tpoint_mask": "0x0" 00:04:48.307 }, 00:04:48.307 "scsi": { 00:04:48.307 "mask": "0x4", 00:04:48.307 "tpoint_mask": "0x0" 00:04:48.307 }, 00:04:48.307 "thread": { 00:04:48.307 "mask": "0x400", 00:04:48.307 "tpoint_mask": "0x0" 00:04:48.307 }, 00:04:48.307 "tpoint_group_mask": "0x8", 00:04:48.307 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid55532" 00:04:48.307 }' 00:04:48.307 06:35:02 -- rpc/rpc.sh@43 -- # jq length 00:04:48.566 06:35:02 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:48.566 06:35:02 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:48.566 06:35:02 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:48.566 06:35:02 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:48.566 06:35:02 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:48.566 06:35:02 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:48.566 06:35:02 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:48.566 06:35:02 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:48.566 ************************************ 00:04:48.567 END TEST rpc_trace_cmd_test 00:04:48.567 ************************************ 00:04:48.567 06:35:02 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:48.567 00:04:48.567 real 0m0.277s 00:04:48.567 user 0m0.244s 00:04:48.567 sys 0m0.022s 00:04:48.567 06:35:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.567 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:04:48.567 06:35:02 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:48.567 06:35:02 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:48.567 06:35:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.567 06:35:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.567 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:04:48.826 ************************************ 00:04:48.826 START TEST go_rpc 00:04:48.826 ************************************ 00:04:48.826 06:35:02 -- common/autotest_common.sh@1114 -- # go_rpc 00:04:48.826 06:35:02 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:48.826 06:35:02 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:48.826 06:35:02 -- rpc/rpc.sh@52 -- # jq length 00:04:48.826 06:35:02 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:48.826 06:35:02 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:48.826 06:35:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.826 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:04:48.826 06:35:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.826 06:35:02 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:48.826 06:35:02 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 
00:04:48.826 06:35:02 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["378a530e-7f2b-4302-9859-8603f25012c4"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"378a530e-7f2b-4302-9859-8603f25012c4","zoned":false}]' 00:04:48.826 06:35:02 -- rpc/rpc.sh@57 -- # jq length 00:04:48.826 06:35:02 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:48.826 06:35:02 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:48.826 06:35:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.826 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:04:48.826 06:35:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.826 06:35:02 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:48.826 06:35:02 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:48.826 06:35:02 -- rpc/rpc.sh@61 -- # jq length 00:04:48.826 ************************************ 00:04:48.826 END TEST go_rpc 00:04:48.826 ************************************ 00:04:48.826 06:35:02 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:48.826 00:04:48.826 real 0m0.223s 00:04:48.826 user 0m0.147s 00:04:48.826 sys 0m0.041s 00:04:48.826 06:35:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.826 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:04:49.085 06:35:02 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:49.085 06:35:02 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:49.085 06:35:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.085 06:35:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.085 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:04:49.085 ************************************ 00:04:49.085 START TEST rpc_daemon_integrity 00:04:49.085 ************************************ 00:04:49.085 06:35:02 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:49.085 06:35:02 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:49.085 06:35:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.085 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:04:49.085 06:35:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.085 06:35:02 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:49.085 06:35:02 -- rpc/rpc.sh@13 -- # jq length 00:04:49.085 06:35:02 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:49.085 06:35:02 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:49.085 06:35:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.085 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:04:49.085 06:35:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.085 06:35:02 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:49.085 06:35:02 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:49.085 06:35:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.085 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:04:49.085 06:35:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.085 06:35:02 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:49.085 { 00:04:49.085 "aliases": [ 00:04:49.085 "e6ac6956-cae3-426c-b824-ad6cae62b6df" 00:04:49.085 ], 00:04:49.085 
"assigned_rate_limits": { 00:04:49.085 "r_mbytes_per_sec": 0, 00:04:49.085 "rw_ios_per_sec": 0, 00:04:49.085 "rw_mbytes_per_sec": 0, 00:04:49.085 "w_mbytes_per_sec": 0 00:04:49.085 }, 00:04:49.085 "block_size": 512, 00:04:49.085 "claimed": false, 00:04:49.085 "driver_specific": {}, 00:04:49.085 "memory_domains": [ 00:04:49.085 { 00:04:49.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.085 "dma_device_type": 2 00:04:49.086 } 00:04:49.086 ], 00:04:49.086 "name": "Malloc3", 00:04:49.086 "num_blocks": 16384, 00:04:49.086 "product_name": "Malloc disk", 00:04:49.086 "supported_io_types": { 00:04:49.086 "abort": true, 00:04:49.086 "compare": false, 00:04:49.086 "compare_and_write": false, 00:04:49.086 "flush": true, 00:04:49.086 "nvme_admin": false, 00:04:49.086 "nvme_io": false, 00:04:49.086 "read": true, 00:04:49.086 "reset": true, 00:04:49.086 "unmap": true, 00:04:49.086 "write": true, 00:04:49.086 "write_zeroes": true 00:04:49.086 }, 00:04:49.086 "uuid": "e6ac6956-cae3-426c-b824-ad6cae62b6df", 00:04:49.086 "zoned": false 00:04:49.086 } 00:04:49.086 ]' 00:04:49.086 06:35:02 -- rpc/rpc.sh@17 -- # jq length 00:04:49.086 06:35:02 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:49.086 06:35:02 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:49.086 06:35:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.086 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:04:49.086 [2024-12-14 06:35:02.998493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:49.086 [2024-12-14 06:35:02.998716] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:49.086 [2024-12-14 06:35:02.998744] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22d3680 00:04:49.086 [2024-12-14 06:35:02.998756] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:49.086 [2024-12-14 06:35:03.000115] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:49.086 [2024-12-14 06:35:03.000140] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:49.086 Passthru0 00:04:49.086 06:35:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.086 06:35:03 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:49.086 06:35:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.086 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.086 06:35:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.086 06:35:03 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:49.086 { 00:04:49.086 "aliases": [ 00:04:49.086 "e6ac6956-cae3-426c-b824-ad6cae62b6df" 00:04:49.086 ], 00:04:49.086 "assigned_rate_limits": { 00:04:49.086 "r_mbytes_per_sec": 0, 00:04:49.086 "rw_ios_per_sec": 0, 00:04:49.086 "rw_mbytes_per_sec": 0, 00:04:49.086 "w_mbytes_per_sec": 0 00:04:49.086 }, 00:04:49.086 "block_size": 512, 00:04:49.086 "claim_type": "exclusive_write", 00:04:49.086 "claimed": true, 00:04:49.086 "driver_specific": {}, 00:04:49.086 "memory_domains": [ 00:04:49.086 { 00:04:49.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.086 "dma_device_type": 2 00:04:49.086 } 00:04:49.086 ], 00:04:49.086 "name": "Malloc3", 00:04:49.086 "num_blocks": 16384, 00:04:49.086 "product_name": "Malloc disk", 00:04:49.086 "supported_io_types": { 00:04:49.086 "abort": true, 00:04:49.086 "compare": false, 00:04:49.086 "compare_and_write": false, 00:04:49.086 "flush": true, 00:04:49.086 "nvme_admin": false, 00:04:49.086 "nvme_io": false, 00:04:49.086 
"read": true, 00:04:49.086 "reset": true, 00:04:49.086 "unmap": true, 00:04:49.086 "write": true, 00:04:49.086 "write_zeroes": true 00:04:49.086 }, 00:04:49.086 "uuid": "e6ac6956-cae3-426c-b824-ad6cae62b6df", 00:04:49.086 "zoned": false 00:04:49.086 }, 00:04:49.086 { 00:04:49.086 "aliases": [ 00:04:49.086 "ca8e5b2c-e176-5eaf-9612-fd881cfd6731" 00:04:49.086 ], 00:04:49.086 "assigned_rate_limits": { 00:04:49.086 "r_mbytes_per_sec": 0, 00:04:49.086 "rw_ios_per_sec": 0, 00:04:49.086 "rw_mbytes_per_sec": 0, 00:04:49.086 "w_mbytes_per_sec": 0 00:04:49.086 }, 00:04:49.086 "block_size": 512, 00:04:49.086 "claimed": false, 00:04:49.086 "driver_specific": { 00:04:49.086 "passthru": { 00:04:49.086 "base_bdev_name": "Malloc3", 00:04:49.086 "name": "Passthru0" 00:04:49.086 } 00:04:49.086 }, 00:04:49.086 "memory_domains": [ 00:04:49.086 { 00:04:49.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.086 "dma_device_type": 2 00:04:49.086 } 00:04:49.086 ], 00:04:49.086 "name": "Passthru0", 00:04:49.086 "num_blocks": 16384, 00:04:49.086 "product_name": "passthru", 00:04:49.086 "supported_io_types": { 00:04:49.086 "abort": true, 00:04:49.086 "compare": false, 00:04:49.086 "compare_and_write": false, 00:04:49.086 "flush": true, 00:04:49.086 "nvme_admin": false, 00:04:49.086 "nvme_io": false, 00:04:49.086 "read": true, 00:04:49.086 "reset": true, 00:04:49.086 "unmap": true, 00:04:49.086 "write": true, 00:04:49.086 "write_zeroes": true 00:04:49.086 }, 00:04:49.086 "uuid": "ca8e5b2c-e176-5eaf-9612-fd881cfd6731", 00:04:49.086 "zoned": false 00:04:49.086 } 00:04:49.086 ]' 00:04:49.086 06:35:03 -- rpc/rpc.sh@21 -- # jq length 00:04:49.345 06:35:03 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:49.345 06:35:03 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:49.345 06:35:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.345 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.345 06:35:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.345 06:35:03 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:49.345 06:35:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.345 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.345 06:35:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.345 06:35:03 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:49.345 06:35:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.345 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.345 06:35:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.345 06:35:03 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:49.345 06:35:03 -- rpc/rpc.sh@26 -- # jq length 00:04:49.345 ************************************ 00:04:49.345 END TEST rpc_daemon_integrity 00:04:49.345 ************************************ 00:04:49.345 06:35:03 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:49.345 00:04:49.345 real 0m0.327s 00:04:49.345 user 0m0.215s 00:04:49.345 sys 0m0.037s 00:04:49.345 06:35:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.345 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.345 06:35:03 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:49.345 06:35:03 -- rpc/rpc.sh@84 -- # killprocess 55532 00:04:49.345 06:35:03 -- common/autotest_common.sh@936 -- # '[' -z 55532 ']' 00:04:49.345 06:35:03 -- common/autotest_common.sh@940 -- # kill -0 55532 00:04:49.345 06:35:03 -- common/autotest_common.sh@941 -- # uname 00:04:49.345 06:35:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:49.345 
06:35:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55532 00:04:49.345 killing process with pid 55532 00:04:49.345 06:35:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:49.345 06:35:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:49.345 06:35:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55532' 00:04:49.346 06:35:03 -- common/autotest_common.sh@955 -- # kill 55532 00:04:49.346 06:35:03 -- common/autotest_common.sh@960 -- # wait 55532 00:04:49.913 00:04:49.913 real 0m3.479s 00:04:49.913 user 0m4.412s 00:04:49.913 sys 0m0.885s 00:04:49.913 06:35:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.913 ************************************ 00:04:49.913 END TEST rpc 00:04:49.913 ************************************ 00:04:49.913 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.913 06:35:03 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:49.913 06:35:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.913 06:35:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.913 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.913 ************************************ 00:04:49.913 START TEST rpc_client 00:04:49.913 ************************************ 00:04:49.913 06:35:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:50.172 * Looking for test storage... 00:04:50.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:50.173 06:35:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:50.173 06:35:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:50.173 06:35:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:50.173 06:35:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:50.173 06:35:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:50.173 06:35:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:50.173 06:35:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:50.173 06:35:04 -- scripts/common.sh@335 -- # IFS=.-: 00:04:50.173 06:35:04 -- scripts/common.sh@335 -- # read -ra ver1 00:04:50.173 06:35:04 -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.173 06:35:04 -- scripts/common.sh@336 -- # read -ra ver2 00:04:50.173 06:35:04 -- scripts/common.sh@337 -- # local 'op=<' 00:04:50.173 06:35:04 -- scripts/common.sh@339 -- # ver1_l=2 00:04:50.173 06:35:04 -- scripts/common.sh@340 -- # ver2_l=1 00:04:50.173 06:35:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:50.173 06:35:04 -- scripts/common.sh@343 -- # case "$op" in 00:04:50.173 06:35:04 -- scripts/common.sh@344 -- # : 1 00:04:50.173 06:35:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:50.173 06:35:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.173 06:35:04 -- scripts/common.sh@364 -- # decimal 1 00:04:50.173 06:35:04 -- scripts/common.sh@352 -- # local d=1 00:04:50.173 06:35:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.173 06:35:04 -- scripts/common.sh@354 -- # echo 1 00:04:50.173 06:35:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:50.173 06:35:04 -- scripts/common.sh@365 -- # decimal 2 00:04:50.173 06:35:04 -- scripts/common.sh@352 -- # local d=2 00:04:50.173 06:35:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.173 06:35:04 -- scripts/common.sh@354 -- # echo 2 00:04:50.173 06:35:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:50.173 06:35:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:50.173 06:35:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:50.173 06:35:04 -- scripts/common.sh@367 -- # return 0 00:04:50.173 06:35:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.173 06:35:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:50.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.173 --rc genhtml_branch_coverage=1 00:04:50.173 --rc genhtml_function_coverage=1 00:04:50.173 --rc genhtml_legend=1 00:04:50.173 --rc geninfo_all_blocks=1 00:04:50.173 --rc geninfo_unexecuted_blocks=1 00:04:50.173 00:04:50.173 ' 00:04:50.173 06:35:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:50.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.173 --rc genhtml_branch_coverage=1 00:04:50.173 --rc genhtml_function_coverage=1 00:04:50.173 --rc genhtml_legend=1 00:04:50.173 --rc geninfo_all_blocks=1 00:04:50.173 --rc geninfo_unexecuted_blocks=1 00:04:50.173 00:04:50.173 ' 00:04:50.173 06:35:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:50.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.173 --rc genhtml_branch_coverage=1 00:04:50.173 --rc genhtml_function_coverage=1 00:04:50.173 --rc genhtml_legend=1 00:04:50.173 --rc geninfo_all_blocks=1 00:04:50.173 --rc geninfo_unexecuted_blocks=1 00:04:50.173 00:04:50.173 ' 00:04:50.173 06:35:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:50.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.173 --rc genhtml_branch_coverage=1 00:04:50.173 --rc genhtml_function_coverage=1 00:04:50.173 --rc genhtml_legend=1 00:04:50.173 --rc geninfo_all_blocks=1 00:04:50.173 --rc geninfo_unexecuted_blocks=1 00:04:50.173 00:04:50.173 ' 00:04:50.173 06:35:04 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:50.173 OK 00:04:50.173 06:35:04 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:50.173 00:04:50.173 real 0m0.220s 00:04:50.173 user 0m0.138s 00:04:50.173 sys 0m0.090s 00:04:50.173 06:35:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.173 06:35:04 -- common/autotest_common.sh@10 -- # set +x 00:04:50.173 ************************************ 00:04:50.173 END TEST rpc_client 00:04:50.173 ************************************ 00:04:50.173 06:35:04 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:50.173 06:35:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.173 06:35:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.173 06:35:04 -- common/autotest_common.sh@10 -- # set +x 00:04:50.173 ************************************ 00:04:50.173 START TEST 
json_config 00:04:50.173 ************************************ 00:04:50.173 06:35:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:50.433 06:35:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:50.433 06:35:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:50.433 06:35:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:50.433 06:35:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:50.433 06:35:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:50.433 06:35:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:50.433 06:35:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:50.433 06:35:04 -- scripts/common.sh@335 -- # IFS=.-: 00:04:50.433 06:35:04 -- scripts/common.sh@335 -- # read -ra ver1 00:04:50.433 06:35:04 -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.433 06:35:04 -- scripts/common.sh@336 -- # read -ra ver2 00:04:50.433 06:35:04 -- scripts/common.sh@337 -- # local 'op=<' 00:04:50.433 06:35:04 -- scripts/common.sh@339 -- # ver1_l=2 00:04:50.433 06:35:04 -- scripts/common.sh@340 -- # ver2_l=1 00:04:50.433 06:35:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:50.433 06:35:04 -- scripts/common.sh@343 -- # case "$op" in 00:04:50.433 06:35:04 -- scripts/common.sh@344 -- # : 1 00:04:50.433 06:35:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:50.433 06:35:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.433 06:35:04 -- scripts/common.sh@364 -- # decimal 1 00:04:50.433 06:35:04 -- scripts/common.sh@352 -- # local d=1 00:04:50.433 06:35:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.433 06:35:04 -- scripts/common.sh@354 -- # echo 1 00:04:50.433 06:35:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:50.433 06:35:04 -- scripts/common.sh@365 -- # decimal 2 00:04:50.433 06:35:04 -- scripts/common.sh@352 -- # local d=2 00:04:50.433 06:35:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.433 06:35:04 -- scripts/common.sh@354 -- # echo 2 00:04:50.433 06:35:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:50.433 06:35:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:50.433 06:35:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:50.433 06:35:04 -- scripts/common.sh@367 -- # return 0 00:04:50.433 06:35:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.433 06:35:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:50.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.433 --rc genhtml_branch_coverage=1 00:04:50.433 --rc genhtml_function_coverage=1 00:04:50.433 --rc genhtml_legend=1 00:04:50.433 --rc geninfo_all_blocks=1 00:04:50.433 --rc geninfo_unexecuted_blocks=1 00:04:50.433 00:04:50.433 ' 00:04:50.433 06:35:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:50.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.433 --rc genhtml_branch_coverage=1 00:04:50.433 --rc genhtml_function_coverage=1 00:04:50.433 --rc genhtml_legend=1 00:04:50.433 --rc geninfo_all_blocks=1 00:04:50.433 --rc geninfo_unexecuted_blocks=1 00:04:50.433 00:04:50.433 ' 00:04:50.433 06:35:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:50.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.433 --rc genhtml_branch_coverage=1 00:04:50.433 --rc genhtml_function_coverage=1 00:04:50.433 --rc genhtml_legend=1 00:04:50.433 --rc 
geninfo_all_blocks=1 00:04:50.433 --rc geninfo_unexecuted_blocks=1 00:04:50.433 00:04:50.433 ' 00:04:50.433 06:35:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:50.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.433 --rc genhtml_branch_coverage=1 00:04:50.433 --rc genhtml_function_coverage=1 00:04:50.433 --rc genhtml_legend=1 00:04:50.433 --rc geninfo_all_blocks=1 00:04:50.433 --rc geninfo_unexecuted_blocks=1 00:04:50.433 00:04:50.433 ' 00:04:50.433 06:35:04 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:50.433 06:35:04 -- nvmf/common.sh@7 -- # uname -s 00:04:50.433 06:35:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.433 06:35:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.433 06:35:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.433 06:35:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.433 06:35:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.433 06:35:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.433 06:35:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.433 06:35:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.433 06:35:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.433 06:35:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.433 06:35:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:04:50.433 06:35:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:04:50.433 06:35:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.433 06:35:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.433 06:35:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.433 06:35:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:50.433 06:35:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.433 06:35:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.433 06:35:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.433 06:35:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.433 06:35:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.433 06:35:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.433 
06:35:04 -- paths/export.sh@5 -- # export PATH 00:04:50.433 06:35:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.433 06:35:04 -- nvmf/common.sh@46 -- # : 0 00:04:50.433 06:35:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:50.433 06:35:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:50.433 06:35:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:50.433 06:35:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.433 06:35:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.433 06:35:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:50.433 06:35:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:50.433 06:35:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:50.433 06:35:04 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:50.433 06:35:04 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:50.433 06:35:04 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:50.433 06:35:04 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:50.433 06:35:04 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:50.433 06:35:04 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:50.433 06:35:04 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:50.434 06:35:04 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:50.434 06:35:04 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:50.434 06:35:04 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:50.434 06:35:04 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:50.434 INFO: JSON configuration test init 00:04:50.434 06:35:04 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:50.434 06:35:04 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:50.434 06:35:04 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:50.434 06:35:04 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:50.434 06:35:04 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:50.434 06:35:04 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:50.434 06:35:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:50.434 06:35:04 -- common/autotest_common.sh@10 -- # set +x 00:04:50.434 06:35:04 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:50.434 06:35:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:50.434 06:35:04 -- common/autotest_common.sh@10 -- # set +x 00:04:50.434 06:35:04 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:50.434 06:35:04 -- json_config/json_config.sh@98 -- # local app=target 00:04:50.434 
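For the 'target' app, json_config_test_start_app amounts to launching spdk_tgt with no configuration and then pushing one in over JSON-RPC; a minimal sketch using the same binary, socket and helper scripts as this run (backgrounding and the wait-for-socket step are glossed over here):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # once the RPC socket is up, feed the generated NVMe-oF config into the target:
  /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems | \
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config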
06:35:04 -- json_config/json_config.sh@99 -- # shift 00:04:50.434 06:35:04 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:50.434 06:35:04 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:50.434 06:35:04 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:50.434 06:35:04 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:50.434 06:35:04 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:50.434 06:35:04 -- json_config/json_config.sh@111 -- # app_pid[$app]=55853 00:04:50.434 06:35:04 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:50.434 06:35:04 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:50.434 Waiting for target to run... 00:04:50.434 06:35:04 -- json_config/json_config.sh@114 -- # waitforlisten 55853 /var/tmp/spdk_tgt.sock 00:04:50.434 06:35:04 -- common/autotest_common.sh@829 -- # '[' -z 55853 ']' 00:04:50.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:50.434 06:35:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.434 06:35:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.434 06:35:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.434 06:35:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.434 06:35:04 -- common/autotest_common.sh@10 -- # set +x 00:04:50.693 [2024-12-14 06:35:04.444149] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:50.693 [2024-12-14 06:35:04.444291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55853 ] 00:04:51.262 [2024-12-14 06:35:04.988256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.262 [2024-12-14 06:35:05.086286] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:51.262 [2024-12-14 06:35:05.086509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.522 00:04:51.522 06:35:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.522 06:35:05 -- common/autotest_common.sh@862 -- # return 0 00:04:51.522 06:35:05 -- json_config/json_config.sh@115 -- # echo '' 00:04:51.522 06:35:05 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:51.522 06:35:05 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:51.522 06:35:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:51.522 06:35:05 -- common/autotest_common.sh@10 -- # set +x 00:04:51.522 06:35:05 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:51.522 06:35:05 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:51.522 06:35:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:51.522 06:35:05 -- common/autotest_common.sh@10 -- # set +x 00:04:51.781 06:35:05 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:51.781 06:35:05 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:51.781 06:35:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
load_config 00:04:52.041 06:35:06 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:52.041 06:35:06 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:52.041 06:35:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:52.041 06:35:06 -- common/autotest_common.sh@10 -- # set +x 00:04:52.041 06:35:06 -- json_config/json_config.sh@48 -- # local ret=0 00:04:52.041 06:35:06 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:52.041 06:35:06 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:52.041 06:35:06 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:52.041 06:35:06 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:52.041 06:35:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:52.610 06:35:06 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:52.610 06:35:06 -- json_config/json_config.sh@51 -- # local get_types 00:04:52.610 06:35:06 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:52.610 06:35:06 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:52.610 06:35:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:52.610 06:35:06 -- common/autotest_common.sh@10 -- # set +x 00:04:52.610 06:35:06 -- json_config/json_config.sh@58 -- # return 0 00:04:52.610 06:35:06 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:52.610 06:35:06 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:52.610 06:35:06 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:52.610 06:35:06 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:52.610 06:35:06 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:52.610 06:35:06 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:52.610 06:35:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:52.610 06:35:06 -- common/autotest_common.sh@10 -- # set +x 00:04:52.610 06:35:06 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:52.610 06:35:06 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:52.610 06:35:06 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:52.610 06:35:06 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:52.610 06:35:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:52.870 MallocForNvmf0 00:04:52.870 06:35:06 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:52.870 06:35:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:53.142 MallocForNvmf1 00:04:53.142 06:35:06 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:53.142 06:35:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:53.142 [2024-12-14 06:35:07.120092] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:53.414 06:35:07 -- json_config/json_config.sh@299 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:53.414 06:35:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:53.414 06:35:07 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:53.414 06:35:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:53.674 06:35:07 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:53.674 06:35:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:53.933 06:35:07 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:53.933 06:35:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:54.191 [2024-12-14 06:35:08.096752] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:54.191 06:35:08 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:54.191 06:35:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:54.191 06:35:08 -- common/autotest_common.sh@10 -- # set +x 00:04:54.191 06:35:08 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:54.191 06:35:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:54.191 06:35:08 -- common/autotest_common.sh@10 -- # set +x 00:04:54.449 06:35:08 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:54.449 06:35:08 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:54.449 06:35:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:54.449 MallocBdevForConfigChangeCheck 00:04:54.709 06:35:08 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:54.709 06:35:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:54.709 06:35:08 -- common/autotest_common.sh@10 -- # set +x 00:04:54.709 06:35:08 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:54.709 06:35:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:54.967 INFO: shutting down applications... 00:04:54.967 06:35:08 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
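For reference, the NVMe-oF target state that save_config just captured was built by the tgt_rpc calls above (plus the MallocBdevForConfigChangeCheck bdev used later to detect changes); written out as hand-typed rpc.py invocations against the same socket, the sequence is:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420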
00:04:54.967 06:35:08 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:54.967 06:35:08 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:54.967 06:35:08 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:54.967 06:35:08 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:55.534 Calling clear_iscsi_subsystem 00:04:55.534 Calling clear_nvmf_subsystem 00:04:55.534 Calling clear_nbd_subsystem 00:04:55.534 Calling clear_ublk_subsystem 00:04:55.534 Calling clear_vhost_blk_subsystem 00:04:55.534 Calling clear_vhost_scsi_subsystem 00:04:55.534 Calling clear_scheduler_subsystem 00:04:55.534 Calling clear_bdev_subsystem 00:04:55.534 Calling clear_accel_subsystem 00:04:55.534 Calling clear_vmd_subsystem 00:04:55.534 Calling clear_sock_subsystem 00:04:55.534 Calling clear_iobuf_subsystem 00:04:55.534 06:35:09 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:55.534 06:35:09 -- json_config/json_config.sh@396 -- # count=100 00:04:55.534 06:35:09 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:55.534 06:35:09 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:55.535 06:35:09 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:55.535 06:35:09 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:55.794 06:35:09 -- json_config/json_config.sh@398 -- # break 00:04:55.794 06:35:09 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:55.794 06:35:09 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:04:55.794 06:35:09 -- json_config/json_config.sh@120 -- # local app=target 00:04:55.794 06:35:09 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:55.794 06:35:09 -- json_config/json_config.sh@124 -- # [[ -n 55853 ]] 00:04:55.794 06:35:09 -- json_config/json_config.sh@127 -- # kill -SIGINT 55853 00:04:55.794 06:35:09 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:55.794 06:35:09 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:55.794 06:35:09 -- json_config/json_config.sh@130 -- # kill -0 55853 00:04:55.794 06:35:09 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:56.362 06:35:10 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:56.362 06:35:10 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:56.362 06:35:10 -- json_config/json_config.sh@130 -- # kill -0 55853 00:04:56.362 06:35:10 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:56.362 06:35:10 -- json_config/json_config.sh@132 -- # break 00:04:56.363 SPDK target shutdown done 00:04:56.363 06:35:10 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:56.363 06:35:10 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:56.363 INFO: relaunching applications... 00:04:56.363 06:35:10 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
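Before the old target was shut down above, json_config_clear stripped every subsystem back out over RPC and asserted that a fresh save_config came back empty; done by hand with the same helper scripts, that check would look roughly like:

  /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters \
      | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty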
00:04:56.363 06:35:10 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:56.363 06:35:10 -- json_config/json_config.sh@98 -- # local app=target 00:04:56.363 06:35:10 -- json_config/json_config.sh@99 -- # shift 00:04:56.363 06:35:10 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:56.363 06:35:10 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:56.363 06:35:10 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:56.363 06:35:10 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:56.363 06:35:10 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:56.363 06:35:10 -- json_config/json_config.sh@111 -- # app_pid[$app]=56128 00:04:56.363 Waiting for target to run... 00:04:56.363 06:35:10 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:56.363 06:35:10 -- json_config/json_config.sh@114 -- # waitforlisten 56128 /var/tmp/spdk_tgt.sock 00:04:56.363 06:35:10 -- common/autotest_common.sh@829 -- # '[' -z 56128 ']' 00:04:56.363 06:35:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.363 06:35:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:56.363 06:35:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.363 06:35:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.363 06:35:10 -- common/autotest_common.sh@10 -- # set +x 00:04:56.363 06:35:10 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:56.363 [2024-12-14 06:35:10.184759] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:56.363 [2024-12-14 06:35:10.184899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56128 ] 00:04:56.622 [2024-12-14 06:35:10.610446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.881 [2024-12-14 06:35:10.713202] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:56.881 [2024-12-14 06:35:10.713439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.141 [2024-12-14 06:35:11.024383] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:57.141 [2024-12-14 06:35:11.056480] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:57.141 06:35:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.141 06:35:11 -- common/autotest_common.sh@862 -- # return 0 00:04:57.141 00:04:57.141 06:35:11 -- json_config/json_config.sh@115 -- # echo '' 00:04:57.141 06:35:11 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:57.141 INFO: Checking if target configuration is the same... 00:04:57.141 06:35:11 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
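The relaunch above is the actual feature under test: the new spdk_tgt instance (pid 56128) boots directly from the JSON saved by the previous instance, with no RPC-driven setup; the relevant invocation, as run above:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json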
00:04:57.141 06:35:11 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:57.141 06:35:11 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:57.141 06:35:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:57.141 + '[' 2 -ne 2 ']' 00:04:57.141 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:57.141 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:57.141 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:57.141 +++ basename /dev/fd/62 00:04:57.141 ++ mktemp /tmp/62.XXX 00:04:57.141 + tmp_file_1=/tmp/62.1XT 00:04:57.398 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:57.398 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:57.398 + tmp_file_2=/tmp/spdk_tgt_config.json.gv0 00:04:57.398 + ret=0 00:04:57.398 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:57.657 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:57.657 + diff -u /tmp/62.1XT /tmp/spdk_tgt_config.json.gv0 00:04:57.657 INFO: JSON config files are the same 00:04:57.657 + echo 'INFO: JSON config files are the same' 00:04:57.657 + rm /tmp/62.1XT /tmp/spdk_tgt_config.json.gv0 00:04:57.657 + exit 0 00:04:57.657 06:35:11 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:57.657 INFO: changing configuration and checking if this can be detected... 00:04:57.657 06:35:11 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:57.658 06:35:11 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:57.658 06:35:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:57.916 06:35:11 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:57.917 06:35:11 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:57.917 06:35:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:57.917 + '[' 2 -ne 2 ']' 00:04:57.917 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:57.917 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:57.917 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:57.917 +++ basename /dev/fd/62 00:04:57.917 ++ mktemp /tmp/62.XXX 00:04:57.917 + tmp_file_1=/tmp/62.nQb 00:04:57.917 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:57.917 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:57.917 + tmp_file_2=/tmp/spdk_tgt_config.json.DBM 00:04:57.917 + ret=0 00:04:57.917 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:58.485 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:58.485 + diff -u /tmp/62.nQb /tmp/spdk_tgt_config.json.DBM 00:04:58.485 + ret=1 00:04:58.485 + echo '=== Start of file: /tmp/62.nQb ===' 00:04:58.485 + cat /tmp/62.nQb 00:04:58.485 + echo '=== End of file: /tmp/62.nQb ===' 00:04:58.485 + echo '' 00:04:58.485 + echo '=== Start of file: /tmp/spdk_tgt_config.json.DBM ===' 00:04:58.485 + cat /tmp/spdk_tgt_config.json.DBM 00:04:58.485 + echo '=== End of file: /tmp/spdk_tgt_config.json.DBM ===' 00:04:58.485 + echo '' 00:04:58.485 + rm /tmp/62.nQb /tmp/spdk_tgt_config.json.DBM 00:04:58.485 + exit 1 00:04:58.485 INFO: configuration change detected. 00:04:58.485 06:35:12 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:04:58.485 06:35:12 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:58.485 06:35:12 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:58.485 06:35:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:58.485 06:35:12 -- common/autotest_common.sh@10 -- # set +x 00:04:58.485 06:35:12 -- json_config/json_config.sh@360 -- # local ret=0 00:04:58.485 06:35:12 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:58.485 06:35:12 -- json_config/json_config.sh@370 -- # [[ -n 56128 ]] 00:04:58.485 06:35:12 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:58.485 06:35:12 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:58.485 06:35:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:58.485 06:35:12 -- common/autotest_common.sh@10 -- # set +x 00:04:58.485 06:35:12 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:58.485 06:35:12 -- json_config/json_config.sh@246 -- # uname -s 00:04:58.485 06:35:12 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:58.485 06:35:12 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:58.485 06:35:12 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:58.485 06:35:12 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:58.485 06:35:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:58.485 06:35:12 -- common/autotest_common.sh@10 -- # set +x 00:04:58.485 06:35:12 -- json_config/json_config.sh@376 -- # killprocess 56128 00:04:58.485 06:35:12 -- common/autotest_common.sh@936 -- # '[' -z 56128 ']' 00:04:58.485 06:35:12 -- common/autotest_common.sh@940 -- # kill -0 56128 00:04:58.485 06:35:12 -- common/autotest_common.sh@941 -- # uname 00:04:58.485 06:35:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:58.485 06:35:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56128 00:04:58.485 06:35:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:58.485 06:35:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:58.485 killing process with pid 56128 00:04:58.485 06:35:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56128' 00:04:58.485 
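Both comparisons above reduce to the same three steps: dump the live configuration over RPC, normalize the live dump and the on-disk file with config_filter.py -method sort, and diff the two. A sketch of what json_diff.sh is doing, with illustrative temp-file names in place of the mktemp results:

cd /home/vagrant/spdk_repo/spdk
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live_config.json
test/json_config/config_filter.py -method sort < /tmp/live_config.json > /tmp/live_sorted.json
test/json_config/config_filter.py -method sort < spdk_tgt_config.json  > /tmp/file_sorted.json
if diff -u /tmp/file_sorted.json /tmp/live_sorted.json; then
  echo 'INFO: JSON config files are the same'
else
  echo 'INFO: configuration change detected.'
fi

Deleting MallocBdevForConfigChangeCheck between the two runs is what flips the diff from exit 0 to exit 1.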
06:35:12 -- common/autotest_common.sh@955 -- # kill 56128 00:04:58.485 06:35:12 -- common/autotest_common.sh@960 -- # wait 56128 00:04:59.053 06:35:12 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:59.053 06:35:12 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:59.053 06:35:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:59.053 06:35:12 -- common/autotest_common.sh@10 -- # set +x 00:04:59.053 06:35:12 -- json_config/json_config.sh@381 -- # return 0 00:04:59.053 06:35:12 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:59.053 INFO: Success 00:04:59.053 ************************************ 00:04:59.053 END TEST json_config 00:04:59.053 ************************************ 00:04:59.053 00:04:59.053 real 0m8.678s 00:04:59.053 user 0m12.308s 00:04:59.053 sys 0m2.023s 00:04:59.053 06:35:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:59.053 06:35:12 -- common/autotest_common.sh@10 -- # set +x 00:04:59.053 06:35:12 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:59.053 06:35:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.053 06:35:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.053 06:35:12 -- common/autotest_common.sh@10 -- # set +x 00:04:59.053 ************************************ 00:04:59.053 START TEST json_config_extra_key 00:04:59.053 ************************************ 00:04:59.053 06:35:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:59.053 06:35:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:59.053 06:35:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:59.053 06:35:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:59.053 06:35:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:59.053 06:35:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:59.053 06:35:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:59.053 06:35:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:59.053 06:35:13 -- scripts/common.sh@335 -- # IFS=.-: 00:04:59.053 06:35:13 -- scripts/common.sh@335 -- # read -ra ver1 00:04:59.053 06:35:13 -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.053 06:35:13 -- scripts/common.sh@336 -- # read -ra ver2 00:04:59.053 06:35:13 -- scripts/common.sh@337 -- # local 'op=<' 00:04:59.053 06:35:13 -- scripts/common.sh@339 -- # ver1_l=2 00:04:59.053 06:35:13 -- scripts/common.sh@340 -- # ver2_l=1 00:04:59.053 06:35:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:59.053 06:35:13 -- scripts/common.sh@343 -- # case "$op" in 00:04:59.053 06:35:13 -- scripts/common.sh@344 -- # : 1 00:04:59.053 06:35:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:59.053 06:35:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.053 06:35:13 -- scripts/common.sh@364 -- # decimal 1 00:04:59.053 06:35:13 -- scripts/common.sh@352 -- # local d=1 00:04:59.053 06:35:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.053 06:35:13 -- scripts/common.sh@354 -- # echo 1 00:04:59.053 06:35:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:59.053 06:35:13 -- scripts/common.sh@365 -- # decimal 2 00:04:59.053 06:35:13 -- scripts/common.sh@352 -- # local d=2 00:04:59.053 06:35:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.053 06:35:13 -- scripts/common.sh@354 -- # echo 2 00:04:59.053 06:35:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:59.313 06:35:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:59.313 06:35:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:59.313 06:35:13 -- scripts/common.sh@367 -- # return 0 00:04:59.313 06:35:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.313 06:35:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:59.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.313 --rc genhtml_branch_coverage=1 00:04:59.313 --rc genhtml_function_coverage=1 00:04:59.313 --rc genhtml_legend=1 00:04:59.313 --rc geninfo_all_blocks=1 00:04:59.313 --rc geninfo_unexecuted_blocks=1 00:04:59.313 00:04:59.313 ' 00:04:59.313 06:35:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:59.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.313 --rc genhtml_branch_coverage=1 00:04:59.313 --rc genhtml_function_coverage=1 00:04:59.313 --rc genhtml_legend=1 00:04:59.313 --rc geninfo_all_blocks=1 00:04:59.313 --rc geninfo_unexecuted_blocks=1 00:04:59.313 00:04:59.313 ' 00:04:59.313 06:35:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:59.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.313 --rc genhtml_branch_coverage=1 00:04:59.313 --rc genhtml_function_coverage=1 00:04:59.313 --rc genhtml_legend=1 00:04:59.313 --rc geninfo_all_blocks=1 00:04:59.313 --rc geninfo_unexecuted_blocks=1 00:04:59.313 00:04:59.313 ' 00:04:59.313 06:35:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:59.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.313 --rc genhtml_branch_coverage=1 00:04:59.313 --rc genhtml_function_coverage=1 00:04:59.313 --rc genhtml_legend=1 00:04:59.313 --rc geninfo_all_blocks=1 00:04:59.313 --rc geninfo_unexecuted_blocks=1 00:04:59.313 00:04:59.313 ' 00:04:59.313 06:35:13 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:59.313 06:35:13 -- nvmf/common.sh@7 -- # uname -s 00:04:59.313 06:35:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.313 06:35:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.313 06:35:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.313 06:35:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.313 06:35:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.313 06:35:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.313 06:35:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.313 06:35:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.313 06:35:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.313 06:35:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.313 06:35:13 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:04:59.313 06:35:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:04:59.313 06:35:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.313 06:35:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.313 06:35:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.313 06:35:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:59.313 06:35:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.313 06:35:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.313 06:35:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.313 06:35:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.313 06:35:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.313 06:35:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.313 06:35:13 -- paths/export.sh@5 -- # export PATH 00:04:59.313 06:35:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.313 06:35:13 -- nvmf/common.sh@46 -- # : 0 00:04:59.313 06:35:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:59.313 06:35:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:59.313 06:35:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:59.313 06:35:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.313 06:35:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.313 06:35:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:59.313 06:35:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:59.313 06:35:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:59.313 06:35:13 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:59.313 06:35:13 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:59.313 06:35:13 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:59.313 06:35:13 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:59.313 06:35:13 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:59.313 06:35:13 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:59.313 06:35:13 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:59.313 06:35:13 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:59.314 INFO: launching applications... 00:04:59.314 06:35:13 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:59.314 06:35:13 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:59.314 06:35:13 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:59.314 06:35:13 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:59.314 06:35:13 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:59.314 06:35:13 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:59.314 06:35:13 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:59.314 06:35:13 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56311 00:04:59.314 Waiting for target to run... 00:04:59.314 06:35:13 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:59.314 06:35:13 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56311 /var/tmp/spdk_tgt.sock 00:04:59.314 06:35:13 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:59.314 06:35:13 -- common/autotest_common.sh@829 -- # '[' -z 56311 ']' 00:04:59.314 06:35:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:59.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:59.314 06:35:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.314 06:35:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:59.314 06:35:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.314 06:35:13 -- common/autotest_common.sh@10 -- # set +x 00:04:59.314 [2024-12-14 06:35:13.145356] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
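The extra_key variant boots the target directly from a JSON file instead of replaying RPCs afterwards. The contents of extra_key.json are not shown in this log; the snippet below is only an illustration of the general shape such a --json file takes (the same "subsystems" layout save_config emits), using a single malloc bdev so it stays self-contained:

cat > /tmp/minimal_tgt_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 8192, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock --json /tmp/minimal_tgt_config.json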
00:04:59.314 [2024-12-14 06:35:13.146147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56311 ] 00:04:59.903 [2024-12-14 06:35:13.685469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.903 [2024-12-14 06:35:13.793407] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:59.903 [2024-12-14 06:35:13.793574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.190 06:35:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.190 00:05:00.190 06:35:14 -- common/autotest_common.sh@862 -- # return 0 00:05:00.190 06:35:14 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:00.190 INFO: shutting down applications... 00:05:00.190 06:35:14 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:00.190 06:35:14 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:00.190 06:35:14 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:00.190 06:35:14 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:00.190 06:35:14 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56311 ]] 00:05:00.190 06:35:14 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56311 00:05:00.190 06:35:14 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:00.190 06:35:14 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:00.190 06:35:14 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56311 00:05:00.190 06:35:14 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:00.758 06:35:14 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:00.758 06:35:14 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:00.758 06:35:14 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56311 00:05:00.758 06:35:14 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:01.325 06:35:15 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:01.325 06:35:15 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:01.325 06:35:15 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56311 00:05:01.325 06:35:15 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:01.325 SPDK target shutdown done 00:05:01.325 Success 00:05:01.325 06:35:15 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:01.325 06:35:15 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:01.325 06:35:15 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:01.325 06:35:15 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:01.325 00:05:01.325 real 0m2.259s 00:05:01.325 user 0m1.750s 00:05:01.325 sys 0m0.579s 00:05:01.325 ************************************ 00:05:01.325 END TEST json_config_extra_key 00:05:01.325 ************************************ 00:05:01.325 06:35:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:01.325 06:35:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.325 06:35:15 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:01.325 06:35:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:01.325 06:35:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:05:01.325 06:35:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.325 ************************************ 00:05:01.325 START TEST alias_rpc 00:05:01.325 ************************************ 00:05:01.325 06:35:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:01.325 * Looking for test storage... 00:05:01.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:01.325 06:35:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:01.325 06:35:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:01.325 06:35:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:01.584 06:35:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:01.584 06:35:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:01.584 06:35:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:01.584 06:35:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:01.584 06:35:15 -- scripts/common.sh@335 -- # IFS=.-: 00:05:01.584 06:35:15 -- scripts/common.sh@335 -- # read -ra ver1 00:05:01.584 06:35:15 -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.584 06:35:15 -- scripts/common.sh@336 -- # read -ra ver2 00:05:01.584 06:35:15 -- scripts/common.sh@337 -- # local 'op=<' 00:05:01.584 06:35:15 -- scripts/common.sh@339 -- # ver1_l=2 00:05:01.584 06:35:15 -- scripts/common.sh@340 -- # ver2_l=1 00:05:01.584 06:35:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:01.584 06:35:15 -- scripts/common.sh@343 -- # case "$op" in 00:05:01.584 06:35:15 -- scripts/common.sh@344 -- # : 1 00:05:01.584 06:35:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:01.584 06:35:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.584 06:35:15 -- scripts/common.sh@364 -- # decimal 1 00:05:01.584 06:35:15 -- scripts/common.sh@352 -- # local d=1 00:05:01.584 06:35:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.584 06:35:15 -- scripts/common.sh@354 -- # echo 1 00:05:01.584 06:35:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:01.584 06:35:15 -- scripts/common.sh@365 -- # decimal 2 00:05:01.584 06:35:15 -- scripts/common.sh@352 -- # local d=2 00:05:01.584 06:35:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.584 06:35:15 -- scripts/common.sh@354 -- # echo 2 00:05:01.584 06:35:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:01.584 06:35:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:01.584 06:35:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:01.584 06:35:15 -- scripts/common.sh@367 -- # return 0 00:05:01.584 06:35:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.584 06:35:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:01.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.584 --rc genhtml_branch_coverage=1 00:05:01.584 --rc genhtml_function_coverage=1 00:05:01.584 --rc genhtml_legend=1 00:05:01.585 --rc geninfo_all_blocks=1 00:05:01.585 --rc geninfo_unexecuted_blocks=1 00:05:01.585 00:05:01.585 ' 00:05:01.585 06:35:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:01.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.585 --rc genhtml_branch_coverage=1 00:05:01.585 --rc genhtml_function_coverage=1 00:05:01.585 --rc genhtml_legend=1 00:05:01.585 --rc geninfo_all_blocks=1 00:05:01.585 --rc geninfo_unexecuted_blocks=1 00:05:01.585 00:05:01.585 ' 00:05:01.585 06:35:15 
-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:01.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.585 --rc genhtml_branch_coverage=1 00:05:01.585 --rc genhtml_function_coverage=1 00:05:01.585 --rc genhtml_legend=1 00:05:01.585 --rc geninfo_all_blocks=1 00:05:01.585 --rc geninfo_unexecuted_blocks=1 00:05:01.585 00:05:01.585 ' 00:05:01.585 06:35:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:01.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.585 --rc genhtml_branch_coverage=1 00:05:01.585 --rc genhtml_function_coverage=1 00:05:01.585 --rc genhtml_legend=1 00:05:01.585 --rc geninfo_all_blocks=1 00:05:01.585 --rc geninfo_unexecuted_blocks=1 00:05:01.585 00:05:01.585 ' 00:05:01.585 06:35:15 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:01.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.585 06:35:15 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56401 00:05:01.585 06:35:15 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56401 00:05:01.585 06:35:15 -- common/autotest_common.sh@829 -- # '[' -z 56401 ']' 00:05:01.585 06:35:15 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.585 06:35:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.585 06:35:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.585 06:35:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.585 06:35:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.585 06:35:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.585 [2024-12-14 06:35:15.450813] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
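Like the json_config tests, alias_rpc protects itself with an ERR trap so the target cannot outlive a failed run. A condensed sketch of that pattern, assuming the build-tree layout seen in this log; the real killprocess in autotest_common.sh additionally inspects the process name before signalling (the reactor_0 / sudo checks visible a little further down in the trace), which is omitted here:

killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to clean up
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" || true                      # reap the child, ignore its exit status
}

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
trap 'killprocess $spdk_tgt_pid; exit 1' ERR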
00:05:01.585 [2024-12-14 06:35:15.451331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56401 ] 00:05:01.844 [2024-12-14 06:35:15.582904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.844 [2024-12-14 06:35:15.717896] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:01.844 [2024-12-14 06:35:15.718322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.781 06:35:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.781 06:35:16 -- common/autotest_common.sh@862 -- # return 0 00:05:02.781 06:35:16 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:02.781 06:35:16 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56401 00:05:02.781 06:35:16 -- common/autotest_common.sh@936 -- # '[' -z 56401 ']' 00:05:02.781 06:35:16 -- common/autotest_common.sh@940 -- # kill -0 56401 00:05:02.781 06:35:16 -- common/autotest_common.sh@941 -- # uname 00:05:02.781 06:35:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:02.781 06:35:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56401 00:05:03.040 killing process with pid 56401 00:05:03.040 06:35:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:03.040 06:35:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:03.040 06:35:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56401' 00:05:03.040 06:35:16 -- common/autotest_common.sh@955 -- # kill 56401 00:05:03.040 06:35:16 -- common/autotest_common.sh@960 -- # wait 56401 00:05:03.608 ************************************ 00:05:03.608 END TEST alias_rpc 00:05:03.608 ************************************ 00:05:03.608 00:05:03.608 real 0m2.149s 00:05:03.608 user 0m2.325s 00:05:03.608 sys 0m0.577s 00:05:03.608 06:35:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:03.608 06:35:17 -- common/autotest_common.sh@10 -- # set +x 00:05:03.608 06:35:17 -- spdk/autotest.sh@169 -- # [[ 1 -eq 0 ]] 00:05:03.608 06:35:17 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.608 06:35:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.608 06:35:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.608 06:35:17 -- common/autotest_common.sh@10 -- # set +x 00:05:03.608 ************************************ 00:05:03.608 START TEST dpdk_mem_utility 00:05:03.608 ************************************ 00:05:03.608 06:35:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.608 * Looking for test storage... 
00:05:03.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:03.608 06:35:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:03.608 06:35:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:03.608 06:35:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:03.608 06:35:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:03.608 06:35:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:03.608 06:35:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:03.608 06:35:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:03.608 06:35:17 -- scripts/common.sh@335 -- # IFS=.-: 00:05:03.608 06:35:17 -- scripts/common.sh@335 -- # read -ra ver1 00:05:03.608 06:35:17 -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.608 06:35:17 -- scripts/common.sh@336 -- # read -ra ver2 00:05:03.608 06:35:17 -- scripts/common.sh@337 -- # local 'op=<' 00:05:03.608 06:35:17 -- scripts/common.sh@339 -- # ver1_l=2 00:05:03.608 06:35:17 -- scripts/common.sh@340 -- # ver2_l=1 00:05:03.608 06:35:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:03.608 06:35:17 -- scripts/common.sh@343 -- # case "$op" in 00:05:03.608 06:35:17 -- scripts/common.sh@344 -- # : 1 00:05:03.608 06:35:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:03.608 06:35:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.608 06:35:17 -- scripts/common.sh@364 -- # decimal 1 00:05:03.608 06:35:17 -- scripts/common.sh@352 -- # local d=1 00:05:03.608 06:35:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.608 06:35:17 -- scripts/common.sh@354 -- # echo 1 00:05:03.608 06:35:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:03.608 06:35:17 -- scripts/common.sh@365 -- # decimal 2 00:05:03.608 06:35:17 -- scripts/common.sh@352 -- # local d=2 00:05:03.608 06:35:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.608 06:35:17 -- scripts/common.sh@354 -- # echo 2 00:05:03.867 06:35:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:03.868 06:35:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:03.868 06:35:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:03.868 06:35:17 -- scripts/common.sh@367 -- # return 0 00:05:03.868 06:35:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.868 06:35:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:03.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.868 --rc genhtml_branch_coverage=1 00:05:03.868 --rc genhtml_function_coverage=1 00:05:03.868 --rc genhtml_legend=1 00:05:03.868 --rc geninfo_all_blocks=1 00:05:03.868 --rc geninfo_unexecuted_blocks=1 00:05:03.868 00:05:03.868 ' 00:05:03.868 06:35:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:03.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.868 --rc genhtml_branch_coverage=1 00:05:03.868 --rc genhtml_function_coverage=1 00:05:03.868 --rc genhtml_legend=1 00:05:03.868 --rc geninfo_all_blocks=1 00:05:03.868 --rc geninfo_unexecuted_blocks=1 00:05:03.868 00:05:03.868 ' 00:05:03.868 06:35:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:03.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.868 --rc genhtml_branch_coverage=1 00:05:03.868 --rc genhtml_function_coverage=1 00:05:03.868 --rc genhtml_legend=1 00:05:03.868 --rc geninfo_all_blocks=1 00:05:03.868 --rc geninfo_unexecuted_blocks=1 00:05:03.868 00:05:03.868 ' 
00:05:03.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.868 06:35:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:03.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.868 --rc genhtml_branch_coverage=1 00:05:03.868 --rc genhtml_function_coverage=1 00:05:03.868 --rc genhtml_legend=1 00:05:03.868 --rc geninfo_all_blocks=1 00:05:03.868 --rc geninfo_unexecuted_blocks=1 00:05:03.868 00:05:03.868 ' 00:05:03.868 06:35:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:03.868 06:35:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56500 00:05:03.868 06:35:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56500 00:05:03.868 06:35:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.868 06:35:17 -- common/autotest_common.sh@829 -- # '[' -z 56500 ']' 00:05:03.868 06:35:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.868 06:35:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.868 06:35:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.868 06:35:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.868 06:35:17 -- common/autotest_common.sh@10 -- # set +x 00:05:03.868 [2024-12-14 06:35:17.674256] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:03.868 [2024-12-14 06:35:17.675118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56500 ] 00:05:03.868 [2024-12-14 06:35:17.812348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.127 [2024-12-14 06:35:17.919299] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:04.127 [2024-12-14 06:35:17.919763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.695 06:35:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.695 06:35:18 -- common/autotest_common.sh@862 -- # return 0 00:05:04.695 06:35:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:04.695 06:35:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:04.695 06:35:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.695 06:35:18 -- common/autotest_common.sh@10 -- # set +x 00:05:04.695 { 00:05:04.695 "filename": "/tmp/spdk_mem_dump.txt" 00:05:04.695 } 00:05:04.695 06:35:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.695 06:35:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:04.955 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:04.955 1 heaps totaling size 814.000000 MiB 00:05:04.955 size: 814.000000 MiB heap id: 0 00:05:04.955 end heaps---------- 00:05:04.955 8 mempools totaling size 598.116089 MiB 00:05:04.955 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:04.955 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:04.955 size: 84.521057 MiB name: bdev_io_56500 00:05:04.955 size: 51.011292 MiB name: evtpool_56500 00:05:04.955 size: 50.003479 MiB name: msgpool_56500 
00:05:04.955 size: 21.763794 MiB name: PDU_Pool 00:05:04.955 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:04.955 size: 0.026123 MiB name: Session_Pool 00:05:04.955 end mempools------- 00:05:04.955 6 memzones totaling size 4.142822 MiB 00:05:04.955 size: 1.000366 MiB name: RG_ring_0_56500 00:05:04.955 size: 1.000366 MiB name: RG_ring_1_56500 00:05:04.955 size: 1.000366 MiB name: RG_ring_4_56500 00:05:04.955 size: 1.000366 MiB name: RG_ring_5_56500 00:05:04.955 size: 0.125366 MiB name: RG_ring_2_56500 00:05:04.955 size: 0.015991 MiB name: RG_ring_3_56500 00:05:04.955 end memzones------- 00:05:04.955 06:35:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:04.955 heap id: 0 total size: 814.000000 MiB number of busy elements: 214 number of free elements: 15 00:05:04.955 list of free elements. size: 12.487671 MiB 00:05:04.955 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:04.955 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:04.955 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:04.955 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:04.955 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:04.955 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:04.955 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:04.955 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:04.955 element at address: 0x200000200000 with size: 0.837219 MiB 00:05:04.955 element at address: 0x20001aa00000 with size: 0.572632 MiB 00:05:04.955 element at address: 0x20000b200000 with size: 0.489990 MiB 00:05:04.955 element at address: 0x200000800000 with size: 0.487061 MiB 00:05:04.955 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:04.955 element at address: 0x200027e00000 with size: 0.398315 MiB 00:05:04.955 element at address: 0x200003a00000 with size: 0.351685 MiB 00:05:04.955 list of standard malloc elements. 
size: 199.249756 MiB 00:05:04.955 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:04.955 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:04.955 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:04.955 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:04.955 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:04.955 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:04.955 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:04.955 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:04.955 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:04.955 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:04.955 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:04.955 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:04.955 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:04.955 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003a5a140 with size: 0.000183 MiB 
00:05:04.956 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:04.956 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:04.956 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:04.956 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:04.956 element at 
address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:04.956 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e66040 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6cc40 
with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:04.956 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6f240 with size: 0.000183 MiB 
00:05:04.957 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:04.957 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:04.957 list of memzone associated elements. size: 602.262573 MiB 00:05:04.957 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:04.957 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:04.957 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:04.957 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:04.957 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:04.957 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56500_0 00:05:04.957 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:04.957 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56500_0 00:05:04.957 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:04.957 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56500_0 00:05:04.957 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:04.957 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:04.957 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:04.957 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:04.957 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:04.957 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56500 00:05:04.957 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:04.957 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56500 00:05:04.957 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:04.957 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56500 00:05:04.957 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:04.957 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:04.957 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:04.957 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:04.957 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:04.957 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:04.957 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:04.957 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:04.957 element at address: 0x200003eff180 with size: 1.000488 MiB 
00:05:04.957 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56500 00:05:04.957 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:04.957 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56500 00:05:04.957 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:04.957 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56500 00:05:04.957 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:04.957 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56500 00:05:04.957 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:04.957 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56500 00:05:04.957 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:04.957 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:04.957 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:04.957 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:04.957 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:04.957 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:04.957 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:04.957 associated memzone info: size: 0.125366 MiB name: RG_ring_2_56500 00:05:04.957 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:04.957 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:04.957 element at address: 0x200027e66100 with size: 0.023743 MiB 00:05:04.957 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:04.957 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:04.957 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56500 00:05:04.957 element at address: 0x200027e6c240 with size: 0.002441 MiB 00:05:04.957 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:04.957 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:04.957 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56500 00:05:04.957 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:04.957 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56500 00:05:04.957 element at address: 0x200027e6cd00 with size: 0.000305 MiB 00:05:04.957 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:04.957 06:35:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:04.957 06:35:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56500 00:05:04.957 06:35:18 -- common/autotest_common.sh@936 -- # '[' -z 56500 ']' 00:05:04.957 06:35:18 -- common/autotest_common.sh@940 -- # kill -0 56500 00:05:04.957 06:35:18 -- common/autotest_common.sh@941 -- # uname 00:05:04.957 06:35:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:04.957 06:35:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56500 00:05:04.957 killing process with pid 56500 00:05:04.957 06:35:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:04.957 06:35:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:04.957 06:35:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56500' 00:05:04.957 06:35:18 -- common/autotest_common.sh@955 -- # kill 56500 00:05:04.957 06:35:18 -- common/autotest_common.sh@960 -- # wait 56500 00:05:05.526 ************************************ 00:05:05.526 END TEST dpdk_mem_utility 00:05:05.526 
************************************ 00:05:05.526 00:05:05.526 real 0m2.032s 00:05:05.526 user 0m2.096s 00:05:05.526 sys 0m0.560s 00:05:05.526 06:35:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.526 06:35:19 -- common/autotest_common.sh@10 -- # set +x 00:05:05.526 06:35:19 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:05.526 06:35:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.526 06:35:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.526 06:35:19 -- common/autotest_common.sh@10 -- # set +x 00:05:05.526 ************************************ 00:05:05.526 START TEST event 00:05:05.526 ************************************ 00:05:05.526 06:35:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:05.786 * Looking for test storage... 00:05:05.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:05.786 06:35:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:05.786 06:35:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:05.786 06:35:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:05.786 06:35:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:05.786 06:35:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:05.786 06:35:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:05.786 06:35:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:05.786 06:35:19 -- scripts/common.sh@335 -- # IFS=.-: 00:05:05.786 06:35:19 -- scripts/common.sh@335 -- # read -ra ver1 00:05:05.786 06:35:19 -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.786 06:35:19 -- scripts/common.sh@336 -- # read -ra ver2 00:05:05.786 06:35:19 -- scripts/common.sh@337 -- # local 'op=<' 00:05:05.786 06:35:19 -- scripts/common.sh@339 -- # ver1_l=2 00:05:05.786 06:35:19 -- scripts/common.sh@340 -- # ver2_l=1 00:05:05.786 06:35:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:05.786 06:35:19 -- scripts/common.sh@343 -- # case "$op" in 00:05:05.786 06:35:19 -- scripts/common.sh@344 -- # : 1 00:05:05.786 06:35:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:05.786 06:35:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.786 06:35:19 -- scripts/common.sh@364 -- # decimal 1 00:05:05.786 06:35:19 -- scripts/common.sh@352 -- # local d=1 00:05:05.786 06:35:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.786 06:35:19 -- scripts/common.sh@354 -- # echo 1 00:05:05.786 06:35:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:05.786 06:35:19 -- scripts/common.sh@365 -- # decimal 2 00:05:05.786 06:35:19 -- scripts/common.sh@352 -- # local d=2 00:05:05.786 06:35:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.786 06:35:19 -- scripts/common.sh@354 -- # echo 2 00:05:05.786 06:35:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:05.786 06:35:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:05.786 06:35:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:05.786 06:35:19 -- scripts/common.sh@367 -- # return 0 00:05:05.786 06:35:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.786 06:35:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:05.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.786 --rc genhtml_branch_coverage=1 00:05:05.786 --rc genhtml_function_coverage=1 00:05:05.786 --rc genhtml_legend=1 00:05:05.786 --rc geninfo_all_blocks=1 00:05:05.786 --rc geninfo_unexecuted_blocks=1 00:05:05.786 00:05:05.786 ' 00:05:05.786 06:35:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:05.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.786 --rc genhtml_branch_coverage=1 00:05:05.786 --rc genhtml_function_coverage=1 00:05:05.786 --rc genhtml_legend=1 00:05:05.786 --rc geninfo_all_blocks=1 00:05:05.786 --rc geninfo_unexecuted_blocks=1 00:05:05.786 00:05:05.786 ' 00:05:05.786 06:35:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:05.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.786 --rc genhtml_branch_coverage=1 00:05:05.786 --rc genhtml_function_coverage=1 00:05:05.786 --rc genhtml_legend=1 00:05:05.786 --rc geninfo_all_blocks=1 00:05:05.786 --rc geninfo_unexecuted_blocks=1 00:05:05.786 00:05:05.786 ' 00:05:05.786 06:35:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:05.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.786 --rc genhtml_branch_coverage=1 00:05:05.786 --rc genhtml_function_coverage=1 00:05:05.786 --rc genhtml_legend=1 00:05:05.786 --rc geninfo_all_blocks=1 00:05:05.786 --rc geninfo_unexecuted_blocks=1 00:05:05.786 00:05:05.786 ' 00:05:05.786 06:35:19 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:05.786 06:35:19 -- bdev/nbd_common.sh@6 -- # set -e 00:05:05.786 06:35:19 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:05.786 06:35:19 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:05.786 06:35:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.786 06:35:19 -- common/autotest_common.sh@10 -- # set +x 00:05:05.786 ************************************ 00:05:05.786 START TEST event_perf 00:05:05.786 ************************************ 00:05:05.786 06:35:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:05.786 Running I/O for 1 seconds...[2024-12-14 06:35:19.717931] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:05.786 [2024-12-14 06:35:19.718208] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56603 ] 00:05:06.045 [2024-12-14 06:35:19.856220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.045 [2024-12-14 06:35:19.945920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.045 [2024-12-14 06:35:19.946057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.045 [2024-12-14 06:35:19.946307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.045 Running I/O for 1 seconds...[2024-12-14 06:35:19.946910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:07.425 00:05:07.425 lcore 0: 121595 00:05:07.425 lcore 1: 121598 00:05:07.425 lcore 2: 121596 00:05:07.425 lcore 3: 121599 00:05:07.425 done. 00:05:07.425 00:05:07.425 real 0m1.380s 00:05:07.425 user 0m4.178s 00:05:07.425 sys 0m0.079s 00:05:07.425 ************************************ 00:05:07.425 END TEST event_perf 00:05:07.425 ************************************ 00:05:07.425 06:35:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.425 06:35:21 -- common/autotest_common.sh@10 -- # set +x 00:05:07.425 06:35:21 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:07.425 06:35:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:07.425 06:35:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.425 06:35:21 -- common/autotest_common.sh@10 -- # set +x 00:05:07.425 ************************************ 00:05:07.425 START TEST event_reactor 00:05:07.425 ************************************ 00:05:07.425 06:35:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:07.425 [2024-12-14 06:35:21.150543] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
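For reference, the event_perf numbers above come from a standalone test binary and can be rerun by hand with the same flags; a minimal sketch, assuming the same SPDK checkout path this job uses:

# Sketch: rerunning the event_perf measurement traced above (flags copied from the trace).
SPDK_REPO=/home/vagrant/spdk_repo/spdk
"$SPDK_REPO/test/event/event_perf/event_perf" -m 0xF -t 1   # 4 reactors (cores 0-3), 1-second run
# The binary prints per-lcore event counts ("lcore 0: 121595" etc. in this run).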
00:05:07.425 [2024-12-14 06:35:21.150638] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56646 ] 00:05:07.425 [2024-12-14 06:35:21.280953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.425 [2024-12-14 06:35:21.371061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.802 test_start 00:05:08.802 oneshot 00:05:08.802 tick 100 00:05:08.802 tick 100 00:05:08.802 tick 250 00:05:08.802 tick 100 00:05:08.802 tick 100 00:05:08.802 tick 250 00:05:08.802 tick 100 00:05:08.802 tick 500 00:05:08.802 tick 100 00:05:08.802 tick 100 00:05:08.802 tick 250 00:05:08.802 tick 100 00:05:08.802 tick 100 00:05:08.802 test_end 00:05:08.802 00:05:08.802 real 0m1.366s 00:05:08.802 user 0m1.194s 00:05:08.802 sys 0m0.066s 00:05:08.802 06:35:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.802 06:35:22 -- common/autotest_common.sh@10 -- # set +x 00:05:08.802 ************************************ 00:05:08.803 END TEST event_reactor 00:05:08.803 ************************************ 00:05:08.803 06:35:22 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:08.803 06:35:22 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:08.803 06:35:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.803 06:35:22 -- common/autotest_common.sh@10 -- # set +x 00:05:08.803 ************************************ 00:05:08.803 START TEST event_reactor_perf 00:05:08.803 ************************************ 00:05:08.803 06:35:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:08.803 [2024-12-14 06:35:22.572471] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:08.803 [2024-12-14 06:35:22.572574] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56677 ] 00:05:08.803 [2024-12-14 06:35:22.708064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.062 [2024-12-14 06:35:22.798710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.000 test_start 00:05:10.000 test_end 00:05:10.000 Performance: 439897 events per second 00:05:10.000 00:05:10.000 real 0m1.404s 00:05:10.000 user 0m1.236s 00:05:10.000 sys 0m0.060s 00:05:10.000 06:35:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.000 ************************************ 00:05:10.000 END TEST event_reactor_perf 00:05:10.000 ************************************ 00:05:10.000 06:35:23 -- common/autotest_common.sh@10 -- # set +x 00:05:10.260 06:35:23 -- event/event.sh@49 -- # uname -s 00:05:10.260 06:35:24 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:10.260 06:35:24 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:10.260 06:35:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.260 06:35:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.260 06:35:24 -- common/autotest_common.sh@10 -- # set +x 00:05:10.260 ************************************ 00:05:10.260 START TEST event_scheduler 00:05:10.260 ************************************ 00:05:10.260 06:35:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:10.260 * Looking for test storage... 00:05:10.260 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:10.260 06:35:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:10.260 06:35:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:10.260 06:35:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:10.260 06:35:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:10.260 06:35:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:10.260 06:35:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:10.260 06:35:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:10.260 06:35:24 -- scripts/common.sh@335 -- # IFS=.-: 00:05:10.260 06:35:24 -- scripts/common.sh@335 -- # read -ra ver1 00:05:10.260 06:35:24 -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.260 06:35:24 -- scripts/common.sh@336 -- # read -ra ver2 00:05:10.260 06:35:24 -- scripts/common.sh@337 -- # local 'op=<' 00:05:10.260 06:35:24 -- scripts/common.sh@339 -- # ver1_l=2 00:05:10.260 06:35:24 -- scripts/common.sh@340 -- # ver2_l=1 00:05:10.260 06:35:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:10.260 06:35:24 -- scripts/common.sh@343 -- # case "$op" in 00:05:10.260 06:35:24 -- scripts/common.sh@344 -- # : 1 00:05:10.260 06:35:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:10.260 06:35:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.260 06:35:24 -- scripts/common.sh@364 -- # decimal 1 00:05:10.260 06:35:24 -- scripts/common.sh@352 -- # local d=1 00:05:10.260 06:35:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.260 06:35:24 -- scripts/common.sh@354 -- # echo 1 00:05:10.260 06:35:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:10.260 06:35:24 -- scripts/common.sh@365 -- # decimal 2 00:05:10.260 06:35:24 -- scripts/common.sh@352 -- # local d=2 00:05:10.260 06:35:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.260 06:35:24 -- scripts/common.sh@354 -- # echo 2 00:05:10.260 06:35:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:10.260 06:35:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:10.260 06:35:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:10.260 06:35:24 -- scripts/common.sh@367 -- # return 0 00:05:10.260 06:35:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.260 06:35:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:10.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.260 --rc genhtml_branch_coverage=1 00:05:10.260 --rc genhtml_function_coverage=1 00:05:10.260 --rc genhtml_legend=1 00:05:10.260 --rc geninfo_all_blocks=1 00:05:10.260 --rc geninfo_unexecuted_blocks=1 00:05:10.260 00:05:10.260 ' 00:05:10.260 06:35:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:10.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.260 --rc genhtml_branch_coverage=1 00:05:10.260 --rc genhtml_function_coverage=1 00:05:10.260 --rc genhtml_legend=1 00:05:10.260 --rc geninfo_all_blocks=1 00:05:10.260 --rc geninfo_unexecuted_blocks=1 00:05:10.260 00:05:10.260 ' 00:05:10.260 06:35:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:10.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.260 --rc genhtml_branch_coverage=1 00:05:10.260 --rc genhtml_function_coverage=1 00:05:10.260 --rc genhtml_legend=1 00:05:10.260 --rc geninfo_all_blocks=1 00:05:10.260 --rc geninfo_unexecuted_blocks=1 00:05:10.260 00:05:10.260 ' 00:05:10.260 06:35:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:10.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.260 --rc genhtml_branch_coverage=1 00:05:10.260 --rc genhtml_function_coverage=1 00:05:10.260 --rc genhtml_legend=1 00:05:10.260 --rc geninfo_all_blocks=1 00:05:10.260 --rc geninfo_unexecuted_blocks=1 00:05:10.260 00:05:10.260 ' 00:05:10.260 06:35:24 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:10.261 06:35:24 -- scheduler/scheduler.sh@35 -- # scheduler_pid=56751 00:05:10.261 06:35:24 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.261 06:35:24 -- scheduler/scheduler.sh@37 -- # waitforlisten 56751 00:05:10.261 06:35:24 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:10.261 06:35:24 -- common/autotest_common.sh@829 -- # '[' -z 56751 ']' 00:05:10.261 06:35:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.261 06:35:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.261 06:35:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
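The scheduler test app whose RPC socket is being awaited here was launched a few trace lines above with an explicit core mask and main core; collected into a sketch (flags copied from the trace; -f is passed through without interpretation):

# Sketch: launching the scheduler test app as the harness does above.
SPDK_REPO=/home/vagrant/spdk_repo/spdk
"$SPDK_REPO/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
scheduler_pid=$!                 # the harness records this pid and kills it at the end
# -m 0xF         : reactors on cores 0-3
# -p 0x2         : main core 2 (shows up as --main-lcore=2 in the EAL parameters below)
# --wait-for-rpc : pause initialization until framework_start_init arrives over RPC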
00:05:10.261 06:35:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.261 06:35:24 -- common/autotest_common.sh@10 -- # set +x 00:05:10.261 [2024-12-14 06:35:24.242852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:10.261 [2024-12-14 06:35:24.242985] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56751 ] 00:05:10.520 [2024-12-14 06:35:24.383188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:10.520 [2024-12-14 06:35:24.509514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.520 [2024-12-14 06:35:24.509691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.520 [2024-12-14 06:35:24.509879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:10.520 [2024-12-14 06:35:24.509886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.456 06:35:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.456 06:35:25 -- common/autotest_common.sh@862 -- # return 0 00:05:11.456 06:35:25 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:11.456 06:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.456 06:35:25 -- common/autotest_common.sh@10 -- # set +x 00:05:11.456 POWER: Env isn't set yet! 00:05:11.456 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:11.456 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:11.456 POWER: Cannot set governor of lcore 0 to userspace 00:05:11.456 POWER: Attempting to initialise PSTAT power management... 00:05:11.456 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:11.456 POWER: Cannot set governor of lcore 0 to performance 00:05:11.456 POWER: Attempting to initialise AMD PSTATE power management... 00:05:11.456 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:11.456 POWER: Cannot set governor of lcore 0 to userspace 00:05:11.456 POWER: Attempting to initialise CPPC power management... 00:05:11.456 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:11.456 POWER: Cannot set governor of lcore 0 to userspace 00:05:11.456 POWER: Attempting to initialise VM power management... 
00:05:11.456 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:11.456 POWER: Unable to set Power Management Environment for lcore 0 00:05:11.456 [2024-12-14 06:35:25.203170] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:11.456 [2024-12-14 06:35:25.203185] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:11.456 [2024-12-14 06:35:25.203194] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:11.456 [2024-12-14 06:35:25.203208] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:11.456 [2024-12-14 06:35:25.203216] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:11.456 [2024-12-14 06:35:25.203223] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:11.456 06:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.456 06:35:25 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:11.456 06:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.456 06:35:25 -- common/autotest_common.sh@10 -- # set +x 00:05:11.456 [2024-12-14 06:35:25.325020] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:11.456 06:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.456 06:35:25 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:11.456 06:35:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.456 06:35:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.456 06:35:25 -- common/autotest_common.sh@10 -- # set +x 00:05:11.456 ************************************ 00:05:11.456 START TEST scheduler_create_thread 00:05:11.456 ************************************ 00:05:11.456 06:35:25 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:11.456 06:35:25 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:11.456 06:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.456 06:35:25 -- common/autotest_common.sh@10 -- # set +x 00:05:11.456 2 00:05:11.456 06:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.456 06:35:25 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:11.456 06:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.456 06:35:25 -- common/autotest_common.sh@10 -- # set +x 00:05:11.456 3 00:05:11.456 06:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.457 06:35:25 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:11.457 06:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.457 06:35:25 -- common/autotest_common.sh@10 -- # set +x 00:05:11.457 4 00:05:11.457 06:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.457 06:35:25 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:11.457 06:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.457 06:35:25 -- common/autotest_common.sh@10 -- # set +x 00:05:11.457 5 00:05:11.457 06:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.457 06:35:25 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:11.457 06:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.457 06:35:25 -- common/autotest_common.sh@10 -- # set +x 00:05:11.457 6 00:05:11.457 06:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.457 06:35:25 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:11.457 06:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.457 06:35:25 -- common/autotest_common.sh@10 -- # set +x 00:05:11.457 7 00:05:11.457 06:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.457 06:35:25 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:11.457 06:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.457 06:35:25 -- common/autotest_common.sh@10 -- # set +x 00:05:11.457 8 00:05:11.457 06:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.457 06:35:25 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:11.457 06:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.457 06:35:25 -- common/autotest_common.sh@10 -- # set +x 00:05:11.457 9 00:05:11.457 06:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.457 06:35:25 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:11.457 06:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.457 06:35:25 -- common/autotest_common.sh@10 -- # set +x 00:05:11.457 10 00:05:11.457 06:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.457 06:35:25 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:11.457 06:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.457 06:35:25 -- common/autotest_common.sh@10 -- # set +x 00:05:11.457 06:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.457 06:35:25 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:11.457 06:35:25 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:11.457 06:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.457 06:35:25 -- common/autotest_common.sh@10 -- # set +x 00:05:11.457 06:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.457 06:35:25 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:11.457 06:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.457 06:35:25 -- common/autotest_common.sh@10 -- # set +x 00:05:13.361 06:35:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.361 06:35:26 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:13.361 06:35:26 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:13.361 06:35:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.361 06:35:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.298 06:35:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.298 00:05:14.298 real 0m2.615s 00:05:14.298 user 0m0.020s 00:05:14.298 sys 0m0.007s 00:05:14.298 06:35:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.298 06:35:27 -- common/autotest_common.sh@10 -- # set +x 00:05:14.298 
************************************ 00:05:14.298 END TEST scheduler_create_thread 00:05:14.298 ************************************ 00:05:14.298 06:35:27 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:14.298 06:35:27 -- scheduler/scheduler.sh@46 -- # killprocess 56751 00:05:14.298 06:35:27 -- common/autotest_common.sh@936 -- # '[' -z 56751 ']' 00:05:14.298 06:35:27 -- common/autotest_common.sh@940 -- # kill -0 56751 00:05:14.298 06:35:27 -- common/autotest_common.sh@941 -- # uname 00:05:14.298 06:35:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:14.298 06:35:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56751 00:05:14.298 killing process with pid 56751 00:05:14.298 06:35:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:14.298 06:35:28 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:14.298 06:35:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56751' 00:05:14.298 06:35:28 -- common/autotest_common.sh@955 -- # kill 56751 00:05:14.298 06:35:28 -- common/autotest_common.sh@960 -- # wait 56751 00:05:14.579 [2024-12-14 06:35:28.433523] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:14.843 ************************************ 00:05:14.843 END TEST event_scheduler 00:05:14.843 ************************************ 00:05:14.843 00:05:14.843 real 0m4.747s 00:05:14.843 user 0m8.627s 00:05:14.843 sys 0m0.464s 00:05:14.843 06:35:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.843 06:35:28 -- common/autotest_common.sh@10 -- # set +x 00:05:14.843 06:35:28 -- event/event.sh@51 -- # modprobe -n nbd 00:05:14.843 06:35:28 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:14.843 06:35:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.843 06:35:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.843 06:35:28 -- common/autotest_common.sh@10 -- # set +x 00:05:14.843 ************************************ 00:05:14.843 START TEST app_repeat 00:05:14.843 ************************************ 00:05:14.843 06:35:28 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:14.843 06:35:28 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.843 06:35:28 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.843 06:35:28 -- event/event.sh@13 -- # local nbd_list 00:05:14.843 06:35:28 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.843 06:35:28 -- event/event.sh@14 -- # local bdev_list 00:05:14.843 06:35:28 -- event/event.sh@15 -- # local repeat_times=4 00:05:14.843 06:35:28 -- event/event.sh@17 -- # modprobe nbd 00:05:14.843 Process app_repeat pid: 56863 00:05:14.843 spdk_app_start Round 0 00:05:14.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
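The rpc_cmd calls in the scheduler test above all go through scripts/rpc.py against the app's default socket; a rough stand-alone equivalent of that sequence (a sketch: the PYTHONPATH location of scheduler_plugin is an assumption, and thread ids 11/12 are the ones returned in this particular run):

# Sketch of the scheduler_create_thread RPC sequence traced above.
SPDK_REPO=/home/vagrant/spdk_repo/spdk
RPC="$SPDK_REPO/scripts/rpc.py -s /var/tmp/spdk.sock"
export PYTHONPATH="$SPDK_REPO/test/event/scheduler:$PYTHONPATH"   # assumed location of scheduler_plugin
$RPC framework_set_scheduler dynamic                  # pick the dynamic scheduler
$RPC framework_start_init                             # leave the --wait-for-rpc state
$RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
$RPC --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0              # prints the new thread id (11 here)
$RPC --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # make that thread ~50% active
$RPC --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100                # id 12 here
$RPC --plugin scheduler_plugin scheduler_thread_delete 12                               # then drop it again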
00:05:14.843 06:35:28 -- event/event.sh@19 -- # repeat_pid=56863 00:05:14.843 06:35:28 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.843 06:35:28 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 56863' 00:05:14.843 06:35:28 -- event/event.sh@23 -- # for i in {0..2} 00:05:14.843 06:35:28 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:14.843 06:35:28 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:14.843 06:35:28 -- event/event.sh@25 -- # waitforlisten 56863 /var/tmp/spdk-nbd.sock 00:05:14.843 06:35:28 -- common/autotest_common.sh@829 -- # '[' -z 56863 ']' 00:05:14.843 06:35:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.843 06:35:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.843 06:35:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:14.843 06:35:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.843 06:35:28 -- common/autotest_common.sh@10 -- # set +x 00:05:15.102 [2024-12-14 06:35:28.857813] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:15.102 [2024-12-14 06:35:28.858008] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56863 ] 00:05:15.102 [2024-12-14 06:35:29.007854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.361 [2024-12-14 06:35:29.162977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.361 [2024-12-14 06:35:29.162982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.929 06:35:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.929 06:35:29 -- common/autotest_common.sh@862 -- # return 0 00:05:15.929 06:35:29 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.188 Malloc0 00:05:16.188 06:35:30 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.447 Malloc1 00:05:16.447 06:35:30 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.447 06:35:30 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.447 06:35:30 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.447 06:35:30 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:16.447 06:35:30 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.447 06:35:30 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:16.447 06:35:30 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.447 06:35:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.447 06:35:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.447 06:35:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:16.447 06:35:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.447 06:35:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:16.447 06:35:30 -- bdev/nbd_common.sh@12 -- # local i 00:05:16.447 06:35:30 -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:16.447 06:35:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.447 06:35:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:16.706 /dev/nbd0 00:05:16.706 06:35:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:16.706 06:35:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:16.706 06:35:30 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:16.706 06:35:30 -- common/autotest_common.sh@867 -- # local i 00:05:16.706 06:35:30 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:16.706 06:35:30 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:16.706 06:35:30 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:16.706 06:35:30 -- common/autotest_common.sh@871 -- # break 00:05:16.706 06:35:30 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:16.706 06:35:30 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:16.706 06:35:30 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.706 1+0 records in 00:05:16.707 1+0 records out 00:05:16.707 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347136 s, 11.8 MB/s 00:05:16.707 06:35:30 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.707 06:35:30 -- common/autotest_common.sh@884 -- # size=4096 00:05:16.707 06:35:30 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.707 06:35:30 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:16.707 06:35:30 -- common/autotest_common.sh@887 -- # return 0 00:05:16.707 06:35:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.707 06:35:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.707 06:35:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.275 /dev/nbd1 00:05:17.275 06:35:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.275 06:35:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.275 06:35:30 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:17.275 06:35:30 -- common/autotest_common.sh@867 -- # local i 00:05:17.275 06:35:30 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:17.275 06:35:30 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:17.275 06:35:30 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:17.275 06:35:30 -- common/autotest_common.sh@871 -- # break 00:05:17.275 06:35:30 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:17.275 06:35:30 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:17.275 06:35:30 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.275 1+0 records in 00:05:17.275 1+0 records out 00:05:17.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287999 s, 14.2 MB/s 00:05:17.275 06:35:30 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.275 06:35:30 -- common/autotest_common.sh@884 -- # size=4096 00:05:17.275 06:35:30 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.275 06:35:30 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:17.275 06:35:30 -- common/autotest_common.sh@887 -- # return 0 00:05:17.275 
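Before any data is written in Round 0 above, each malloc bdev is exposed as a kernel NBD node and polled until it appears; condensed into a manual sketch (assuming app_repeat is serving RPC on /var/tmp/spdk-nbd.sock, and using /tmp/nbdtest in place of the harness's scratch file):

# Sketch of the bdev to /dev/nbdX hookup traced above.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create 64 4096            # 64 MiB bdev with 4 KiB blocks, returns "Malloc0"
$RPC nbd_start_disk Malloc0 /dev/nbd0      # expose the bdev as /dev/nbd0
grep -q -w nbd0 /proc/partitions           # waitfornbd: retry until the node is visible
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # 1-block sanity read, as above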
06:35:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.275 06:35:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.275 06:35:30 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.275 06:35:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.275 06:35:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.275 06:35:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.275 { 00:05:17.275 "bdev_name": "Malloc0", 00:05:17.275 "nbd_device": "/dev/nbd0" 00:05:17.275 }, 00:05:17.275 { 00:05:17.275 "bdev_name": "Malloc1", 00:05:17.275 "nbd_device": "/dev/nbd1" 00:05:17.275 } 00:05:17.275 ]' 00:05:17.275 06:35:31 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.275 { 00:05:17.275 "bdev_name": "Malloc0", 00:05:17.275 "nbd_device": "/dev/nbd0" 00:05:17.275 }, 00:05:17.275 { 00:05:17.275 "bdev_name": "Malloc1", 00:05:17.275 "nbd_device": "/dev/nbd1" 00:05:17.275 } 00:05:17.275 ]' 00:05:17.275 06:35:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.535 /dev/nbd1' 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.535 /dev/nbd1' 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@65 -- # count=2 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@95 -- # count=2 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.535 256+0 records in 00:05:17.535 256+0 records out 00:05:17.535 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108167 s, 96.9 MB/s 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.535 256+0 records in 00:05:17.535 256+0 records out 00:05:17.535 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02462 s, 42.6 MB/s 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:17.535 256+0 records in 00:05:17.535 256+0 records out 00:05:17.535 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267419 s, 39.2 MB/s 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@51 -- # local i 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.535 06:35:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:17.794 06:35:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:17.794 06:35:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:17.794 06:35:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:17.794 06:35:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.794 06:35:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.794 06:35:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:17.794 06:35:31 -- bdev/nbd_common.sh@41 -- # break 00:05:17.794 06:35:31 -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.794 06:35:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.794 06:35:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.053 06:35:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.054 06:35:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.054 06:35:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.054 06:35:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.054 06:35:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.054 06:35:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.054 06:35:31 -- bdev/nbd_common.sh@41 -- # break 00:05:18.054 06:35:31 -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.054 06:35:31 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.054 06:35:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.054 06:35:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.313 06:35:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.313 06:35:32 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.313 06:35:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.313 06:35:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.313 06:35:32 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.313 06:35:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.313 06:35:32 -- bdev/nbd_common.sh@65 -- # true 00:05:18.313 06:35:32 -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.313 
06:35:32 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.313 06:35:32 -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.313 06:35:32 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.313 06:35:32 -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.313 06:35:32 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:18.880 06:35:32 -- event/event.sh@35 -- # sleep 3 00:05:19.139 [2024-12-14 06:35:32.880514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.139 [2024-12-14 06:35:33.023568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.139 [2024-12-14 06:35:33.023578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.139 [2024-12-14 06:35:33.100190] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.139 [2024-12-14 06:35:33.100359] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:21.675 06:35:35 -- event/event.sh@23 -- # for i in {0..2} 00:05:21.675 06:35:35 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:21.675 spdk_app_start Round 1 00:05:21.675 06:35:35 -- event/event.sh@25 -- # waitforlisten 56863 /var/tmp/spdk-nbd.sock 00:05:21.675 06:35:35 -- common/autotest_common.sh@829 -- # '[' -z 56863 ']' 00:05:21.675 06:35:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.675 06:35:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:21.675 06:35:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
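Round 0 above ends the way every round does: both NBD nodes are detached, nbd_get_disks is checked for an empty list, and the app instance is told to exit before the next round's process starts; roughly (sketch, same socket as above):

# Sketch of the per-round teardown traced above.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1
$RPC nbd_get_disks                          # the harness expects an empty list (count 0) here
$RPC spdk_kill_instance SIGTERM             # shut the app down; the harness then sleeps 3 seconds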
00:05:21.675 06:35:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.675 06:35:35 -- common/autotest_common.sh@10 -- # set +x 00:05:21.935 06:35:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.935 06:35:35 -- common/autotest_common.sh@862 -- # return 0 00:05:21.935 06:35:35 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.194 Malloc0 00:05:22.194 06:35:36 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.490 Malloc1 00:05:22.490 06:35:36 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.490 06:35:36 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.490 06:35:36 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.490 06:35:36 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.490 06:35:36 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.490 06:35:36 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.490 06:35:36 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.490 06:35:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.490 06:35:36 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.490 06:35:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.490 06:35:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.490 06:35:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:22.490 06:35:36 -- bdev/nbd_common.sh@12 -- # local i 00:05:22.490 06:35:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.490 06:35:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.490 06:35:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.766 /dev/nbd0 00:05:22.766 06:35:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.766 06:35:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.766 06:35:36 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:22.766 06:35:36 -- common/autotest_common.sh@867 -- # local i 00:05:22.766 06:35:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:22.766 06:35:36 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:22.766 06:35:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:22.766 06:35:36 -- common/autotest_common.sh@871 -- # break 00:05:22.766 06:35:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:22.766 06:35:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:22.766 06:35:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.766 1+0 records in 00:05:22.766 1+0 records out 00:05:22.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251029 s, 16.3 MB/s 00:05:22.766 06:35:36 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.766 06:35:36 -- common/autotest_common.sh@884 -- # size=4096 00:05:22.766 06:35:36 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.766 06:35:36 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:22.766 06:35:36 -- common/autotest_common.sh@887 -- # return 0 00:05:22.766 06:35:36 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.766 06:35:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.766 06:35:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.025 /dev/nbd1 00:05:23.025 06:35:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.025 06:35:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.025 06:35:36 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:23.025 06:35:36 -- common/autotest_common.sh@867 -- # local i 00:05:23.025 06:35:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:23.025 06:35:36 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:23.025 06:35:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:23.025 06:35:36 -- common/autotest_common.sh@871 -- # break 00:05:23.025 06:35:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:23.025 06:35:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:23.025 06:35:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.025 1+0 records in 00:05:23.025 1+0 records out 00:05:23.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299639 s, 13.7 MB/s 00:05:23.025 06:35:36 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.025 06:35:36 -- common/autotest_common.sh@884 -- # size=4096 00:05:23.025 06:35:36 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.025 06:35:36 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:23.025 06:35:36 -- common/autotest_common.sh@887 -- # return 0 00:05:23.025 06:35:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.025 06:35:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.025 06:35:36 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.025 06:35:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.025 06:35:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.285 06:35:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:23.285 { 00:05:23.285 "bdev_name": "Malloc0", 00:05:23.285 "nbd_device": "/dev/nbd0" 00:05:23.285 }, 00:05:23.285 { 00:05:23.285 "bdev_name": "Malloc1", 00:05:23.285 "nbd_device": "/dev/nbd1" 00:05:23.285 } 00:05:23.285 ]' 00:05:23.285 06:35:37 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:23.285 { 00:05:23.285 "bdev_name": "Malloc0", 00:05:23.285 "nbd_device": "/dev/nbd0" 00:05:23.285 }, 00:05:23.285 { 00:05:23.285 "bdev_name": "Malloc1", 00:05:23.285 "nbd_device": "/dev/nbd1" 00:05:23.285 } 00:05:23.285 ]' 00:05:23.285 06:35:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.285 06:35:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:23.285 /dev/nbd1' 00:05:23.285 06:35:37 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:23.285 /dev/nbd1' 00:05:23.285 06:35:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.285 06:35:37 -- bdev/nbd_common.sh@65 -- # count=2 00:05:23.285 06:35:37 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:23.285 06:35:37 -- bdev/nbd_common.sh@95 -- # count=2 00:05:23.285 06:35:37 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:23.285 06:35:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:23.285 06:35:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:23.285 06:35:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.285 06:35:37 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:23.285 06:35:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.285 06:35:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:23.285 06:35:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:23.285 256+0 records in 00:05:23.285 256+0 records out 00:05:23.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106092 s, 98.8 MB/s 00:05:23.285 06:35:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.285 06:35:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:23.285 256+0 records in 00:05:23.285 256+0 records out 00:05:23.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240133 s, 43.7 MB/s 00:05:23.285 06:35:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.285 06:35:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:23.545 256+0 records in 00:05:23.545 256+0 records out 00:05:23.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307563 s, 34.1 MB/s 00:05:23.545 06:35:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:23.545 06:35:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.545 06:35:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.545 06:35:37 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:23.545 06:35:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.545 06:35:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:23.545 06:35:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:23.545 06:35:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.545 06:35:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:23.545 06:35:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.545 06:35:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:23.545 06:35:37 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.545 06:35:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:23.545 06:35:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.545 06:35:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.545 06:35:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:23.545 06:35:37 -- bdev/nbd_common.sh@51 -- # local i 00:05:23.545 06:35:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.545 06:35:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.804 06:35:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.804 06:35:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.804 06:35:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.804 06:35:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.804 06:35:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.804 06:35:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:05:23.804 06:35:37 -- bdev/nbd_common.sh@41 -- # break 00:05:23.804 06:35:37 -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.804 06:35:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.804 06:35:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.804 06:35:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.063 06:35:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.063 06:35:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.063 06:35:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.063 06:35:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.063 06:35:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.063 06:35:37 -- bdev/nbd_common.sh@41 -- # break 00:05:24.063 06:35:37 -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.063 06:35:37 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.063 06:35:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.063 06:35:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.322 06:35:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.322 06:35:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.322 06:35:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.322 06:35:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.322 06:35:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.322 06:35:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.322 06:35:38 -- bdev/nbd_common.sh@65 -- # true 00:05:24.322 06:35:38 -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.322 06:35:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.322 06:35:38 -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.322 06:35:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.322 06:35:38 -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.322 06:35:38 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.580 06:35:38 -- event/event.sh@35 -- # sleep 3 00:05:24.838 [2024-12-14 06:35:38.796121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.096 [2024-12-14 06:35:38.871432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.096 [2024-12-14 06:35:38.871439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.096 [2024-12-14 06:35:38.948159] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.096 [2024-12-14 06:35:38.948259] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.629 spdk_app_start Round 2 00:05:27.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
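[annotation] Each app_repeat round traced above runs the same nbd round-trip: the two malloc bdevs are exported as /dev/nbd0 and /dev/nbd1, a 1 MiB random file is written through both devices with direct I/O, read back and compared with cmp, and the devices are detached before the app is killed with spdk_kill_instance SIGTERM and restarted for the next round. A condensed, illustrative version of one round (paths, sizes and RPC names taken from the trace; a running nbd app listening on /var/tmp/spdk-nbd.sock is assumed, and the temp-file path is a stand-in):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp=/tmp/nbdrandtest                      # stand-in for test/event/nbdrandtest

    # export the malloc bdevs as nbd block devices
    $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
    $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1

    # write 1 MiB of random data through each device, then verify it
    dd if=/dev/urandom of=$tmp bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct
        cmp -b -n 1M $tmp $nbd
    done
    rm -f $tmp

    # detach both devices and confirm none are left
    $rpc -s $sock nbd_stop_disk /dev/nbd0
    $rpc -s $sock nbd_stop_disk /dev/nbd1
    $rpc -s $sock nbd_get_disks | jq -r '.[] | .nbd_device'   # expect empty output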
00:05:27.629 06:35:41 -- event/event.sh@23 -- # for i in {0..2} 00:05:27.629 06:35:41 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:27.629 06:35:41 -- event/event.sh@25 -- # waitforlisten 56863 /var/tmp/spdk-nbd.sock 00:05:27.629 06:35:41 -- common/autotest_common.sh@829 -- # '[' -z 56863 ']' 00:05:27.629 06:35:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.629 06:35:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.629 06:35:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:27.629 06:35:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.629 06:35:41 -- common/autotest_common.sh@10 -- # set +x 00:05:27.887 06:35:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.887 06:35:41 -- common/autotest_common.sh@862 -- # return 0 00:05:27.887 06:35:41 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.145 Malloc0 00:05:28.145 06:35:42 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.403 Malloc1 00:05:28.403 06:35:42 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.403 06:35:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.403 06:35:42 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.403 06:35:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:28.403 06:35:42 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.403 06:35:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:28.403 06:35:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.403 06:35:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.403 06:35:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.403 06:35:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:28.403 06:35:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.403 06:35:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:28.403 06:35:42 -- bdev/nbd_common.sh@12 -- # local i 00:05:28.403 06:35:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:28.403 06:35:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.403 06:35:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:28.661 /dev/nbd0 00:05:28.661 06:35:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:28.920 06:35:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:28.920 06:35:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:28.920 06:35:42 -- common/autotest_common.sh@867 -- # local i 00:05:28.920 06:35:42 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:28.920 06:35:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:28.920 06:35:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:28.920 06:35:42 -- common/autotest_common.sh@871 -- # break 00:05:28.920 06:35:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:28.920 06:35:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:28.920 06:35:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:28.920 1+0 records in 00:05:28.920 1+0 records out 00:05:28.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218421 s, 18.8 MB/s 00:05:28.920 06:35:42 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.920 06:35:42 -- common/autotest_common.sh@884 -- # size=4096 00:05:28.920 06:35:42 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.920 06:35:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:28.920 06:35:42 -- common/autotest_common.sh@887 -- # return 0 00:05:28.920 06:35:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.920 06:35:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.920 06:35:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:29.179 /dev/nbd1 00:05:29.179 06:35:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:29.179 06:35:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:29.179 06:35:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:29.179 06:35:42 -- common/autotest_common.sh@867 -- # local i 00:05:29.179 06:35:42 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:29.179 06:35:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:29.179 06:35:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:29.179 06:35:42 -- common/autotest_common.sh@871 -- # break 00:05:29.179 06:35:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:29.179 06:35:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:29.179 06:35:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.179 1+0 records in 00:05:29.179 1+0 records out 00:05:29.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431136 s, 9.5 MB/s 00:05:29.180 06:35:43 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.180 06:35:43 -- common/autotest_common.sh@884 -- # size=4096 00:05:29.180 06:35:43 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.180 06:35:43 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:29.180 06:35:43 -- common/autotest_common.sh@887 -- # return 0 00:05:29.180 06:35:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.180 06:35:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.180 06:35:43 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.180 06:35:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.180 06:35:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.439 06:35:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:29.439 { 00:05:29.439 "bdev_name": "Malloc0", 00:05:29.439 "nbd_device": "/dev/nbd0" 00:05:29.439 }, 00:05:29.439 { 00:05:29.439 "bdev_name": "Malloc1", 00:05:29.439 "nbd_device": "/dev/nbd1" 00:05:29.439 } 00:05:29.439 ]' 00:05:29.439 06:35:43 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:29.439 { 00:05:29.439 "bdev_name": "Malloc0", 00:05:29.439 "nbd_device": "/dev/nbd0" 00:05:29.439 }, 00:05:29.439 { 00:05:29.439 "bdev_name": "Malloc1", 00:05:29.439 "nbd_device": "/dev/nbd1" 00:05:29.439 } 00:05:29.439 ]' 00:05:29.439 06:35:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.439 06:35:43 -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name='/dev/nbd0 00:05:29.439 /dev/nbd1' 00:05:29.439 06:35:43 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:29.439 /dev/nbd1' 00:05:29.439 06:35:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.439 06:35:43 -- bdev/nbd_common.sh@65 -- # count=2 00:05:29.439 06:35:43 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:29.439 06:35:43 -- bdev/nbd_common.sh@95 -- # count=2 00:05:29.439 06:35:43 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:29.439 06:35:43 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:29.439 06:35:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.439 06:35:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.439 06:35:43 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:29.439 06:35:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:29.439 06:35:43 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:29.439 06:35:43 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:29.439 256+0 records in 00:05:29.439 256+0 records out 00:05:29.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0075847 s, 138 MB/s 00:05:29.439 06:35:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.439 06:35:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:29.439 256+0 records in 00:05:29.439 256+0 records out 00:05:29.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236354 s, 44.4 MB/s 00:05:29.439 06:35:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.439 06:35:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:29.698 256+0 records in 00:05:29.698 256+0 records out 00:05:29.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299633 s, 35.0 MB/s 00:05:29.698 06:35:43 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:29.698 06:35:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.698 06:35:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.698 06:35:43 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:29.698 06:35:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:29.698 06:35:43 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:29.698 06:35:43 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:29.698 06:35:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.698 06:35:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:29.698 06:35:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.698 06:35:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:29.698 06:35:43 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:29.698 06:35:43 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:29.698 06:35:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.698 06:35:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.698 06:35:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:29.698 06:35:43 -- bdev/nbd_common.sh@51 -- # local i 00:05:29.698 06:35:43 
-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.698 06:35:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.956 06:35:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.956 06:35:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.956 06:35:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.956 06:35:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.956 06:35:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.956 06:35:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.956 06:35:43 -- bdev/nbd_common.sh@41 -- # break 00:05:29.956 06:35:43 -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.956 06:35:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.956 06:35:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:30.215 06:35:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:30.215 06:35:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:30.215 06:35:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:30.215 06:35:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.215 06:35:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.215 06:35:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:30.215 06:35:44 -- bdev/nbd_common.sh@41 -- # break 00:05:30.215 06:35:44 -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.215 06:35:44 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.215 06:35:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.215 06:35:44 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.473 06:35:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:30.473 06:35:44 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:30.473 06:35:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.473 06:35:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:30.473 06:35:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.473 06:35:44 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:30.473 06:35:44 -- bdev/nbd_common.sh@65 -- # true 00:05:30.473 06:35:44 -- bdev/nbd_common.sh@65 -- # count=0 00:05:30.473 06:35:44 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:30.473 06:35:44 -- bdev/nbd_common.sh@104 -- # count=0 00:05:30.473 06:35:44 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:30.473 06:35:44 -- bdev/nbd_common.sh@109 -- # return 0 00:05:30.473 06:35:44 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:30.732 06:35:44 -- event/event.sh@35 -- # sleep 3 00:05:31.302 [2024-12-14 06:35:45.029093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.302 [2024-12-14 06:35:45.164087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.302 [2024-12-14 06:35:45.164097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.302 [2024-12-14 06:35:45.240527] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:31.302 [2024-12-14 06:35:45.240621] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:33.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
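[annotation] The waitfornbd and waitfornbd_exit helpers visible in these traces poll /proc/partitions for up to 20 iterations, and waitfornbd additionally confirms the attached device can serve a 4 KiB direct-I/O read before the test proceeds. A standalone loop in the same spirit, written here purely as a sketch of that readiness check rather than as the helper itself:

    wait_for_nbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                # device node is registered; make sure it can actually serve a read
                dd if=/dev/$nbd_name of=/dev/null bs=4096 count=1 iflag=direct && return 0
            fi
            sleep 0.1
        done
        return 1
    }

    wait_for_nbd nbd0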
00:05:33.834 06:35:47 -- event/event.sh@38 -- # waitforlisten 56863 /var/tmp/spdk-nbd.sock 00:05:33.834 06:35:47 -- common/autotest_common.sh@829 -- # '[' -z 56863 ']' 00:05:33.834 06:35:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.834 06:35:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.834 06:35:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:33.834 06:35:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.834 06:35:47 -- common/autotest_common.sh@10 -- # set +x 00:05:34.093 06:35:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.093 06:35:47 -- common/autotest_common.sh@862 -- # return 0 00:05:34.093 06:35:47 -- event/event.sh@39 -- # killprocess 56863 00:05:34.093 06:35:47 -- common/autotest_common.sh@936 -- # '[' -z 56863 ']' 00:05:34.093 06:35:47 -- common/autotest_common.sh@940 -- # kill -0 56863 00:05:34.093 06:35:47 -- common/autotest_common.sh@941 -- # uname 00:05:34.093 06:35:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:34.093 06:35:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56863 00:05:34.093 06:35:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:34.093 killing process with pid 56863 00:05:34.093 06:35:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:34.093 06:35:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56863' 00:05:34.093 06:35:48 -- common/autotest_common.sh@955 -- # kill 56863 00:05:34.093 06:35:48 -- common/autotest_common.sh@960 -- # wait 56863 00:05:34.352 spdk_app_start is called in Round 0. 00:05:34.352 Shutdown signal received, stop current app iteration 00:05:34.352 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:34.352 spdk_app_start is called in Round 1. 00:05:34.352 Shutdown signal received, stop current app iteration 00:05:34.352 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:34.352 spdk_app_start is called in Round 2. 00:05:34.352 Shutdown signal received, stop current app iteration 00:05:34.352 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:34.352 spdk_app_start is called in Round 3. 
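[annotation] The killprocess helper traced just above follows one pattern throughout these tests: confirm the pid is still alive with kill -0, check that its command name is an SPDK reactor rather than the sudo wrapper, signal it, and wait for it to exit. Roughly, and only as an illustration of that sequence:

    kill_spdk_app() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 0           # already gone
        name=$(ps --no-headers -o comm= "$pid")          # e.g. reactor_0
        [ "$name" = sudo ] && return 1                   # never signal the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                          # only works for our own children
    }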
00:05:34.352 Shutdown signal received, stop current app iteration 00:05:34.352 ************************************ 00:05:34.352 END TEST app_repeat 00:05:34.352 ************************************ 00:05:34.352 06:35:48 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:34.352 06:35:48 -- event/event.sh@42 -- # return 0 00:05:34.352 00:05:34.352 real 0m19.493s 00:05:34.352 user 0m43.132s 00:05:34.352 sys 0m3.304s 00:05:34.352 06:35:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.352 06:35:48 -- common/autotest_common.sh@10 -- # set +x 00:05:34.611 06:35:48 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:34.611 06:35:48 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:34.611 06:35:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.611 06:35:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.611 06:35:48 -- common/autotest_common.sh@10 -- # set +x 00:05:34.611 ************************************ 00:05:34.611 START TEST cpu_locks 00:05:34.611 ************************************ 00:05:34.611 06:35:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:34.611 * Looking for test storage... 00:05:34.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:34.611 06:35:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:34.611 06:35:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:34.611 06:35:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:34.611 06:35:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:34.611 06:35:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:34.612 06:35:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:34.612 06:35:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:34.612 06:35:48 -- scripts/common.sh@335 -- # IFS=.-: 00:05:34.612 06:35:48 -- scripts/common.sh@335 -- # read -ra ver1 00:05:34.612 06:35:48 -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.612 06:35:48 -- scripts/common.sh@336 -- # read -ra ver2 00:05:34.612 06:35:48 -- scripts/common.sh@337 -- # local 'op=<' 00:05:34.612 06:35:48 -- scripts/common.sh@339 -- # ver1_l=2 00:05:34.612 06:35:48 -- scripts/common.sh@340 -- # ver2_l=1 00:05:34.612 06:35:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:34.612 06:35:48 -- scripts/common.sh@343 -- # case "$op" in 00:05:34.612 06:35:48 -- scripts/common.sh@344 -- # : 1 00:05:34.612 06:35:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:34.612 06:35:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.612 06:35:48 -- scripts/common.sh@364 -- # decimal 1 00:05:34.612 06:35:48 -- scripts/common.sh@352 -- # local d=1 00:05:34.612 06:35:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.612 06:35:48 -- scripts/common.sh@354 -- # echo 1 00:05:34.612 06:35:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:34.612 06:35:48 -- scripts/common.sh@365 -- # decimal 2 00:05:34.612 06:35:48 -- scripts/common.sh@352 -- # local d=2 00:05:34.612 06:35:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.612 06:35:48 -- scripts/common.sh@354 -- # echo 2 00:05:34.612 06:35:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:34.612 06:35:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:34.612 06:35:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:34.612 06:35:48 -- scripts/common.sh@367 -- # return 0 00:05:34.612 06:35:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.612 06:35:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:34.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.612 --rc genhtml_branch_coverage=1 00:05:34.612 --rc genhtml_function_coverage=1 00:05:34.612 --rc genhtml_legend=1 00:05:34.612 --rc geninfo_all_blocks=1 00:05:34.612 --rc geninfo_unexecuted_blocks=1 00:05:34.612 00:05:34.612 ' 00:05:34.612 06:35:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:34.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.612 --rc genhtml_branch_coverage=1 00:05:34.612 --rc genhtml_function_coverage=1 00:05:34.612 --rc genhtml_legend=1 00:05:34.612 --rc geninfo_all_blocks=1 00:05:34.612 --rc geninfo_unexecuted_blocks=1 00:05:34.612 00:05:34.612 ' 00:05:34.612 06:35:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:34.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.612 --rc genhtml_branch_coverage=1 00:05:34.612 --rc genhtml_function_coverage=1 00:05:34.612 --rc genhtml_legend=1 00:05:34.612 --rc geninfo_all_blocks=1 00:05:34.612 --rc geninfo_unexecuted_blocks=1 00:05:34.612 00:05:34.612 ' 00:05:34.612 06:35:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:34.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.612 --rc genhtml_branch_coverage=1 00:05:34.612 --rc genhtml_function_coverage=1 00:05:34.612 --rc genhtml_legend=1 00:05:34.612 --rc geninfo_all_blocks=1 00:05:34.612 --rc geninfo_unexecuted_blocks=1 00:05:34.612 00:05:34.612 ' 00:05:34.612 06:35:48 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:34.612 06:35:48 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:34.612 06:35:48 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:34.612 06:35:48 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:34.612 06:35:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.612 06:35:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.612 06:35:48 -- common/autotest_common.sh@10 -- # set +x 00:05:34.612 ************************************ 00:05:34.612 START TEST default_locks 00:05:34.612 ************************************ 00:05:34.612 06:35:48 -- common/autotest_common.sh@1114 -- # default_locks 00:05:34.612 06:35:48 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57507 00:05:34.612 06:35:48 -- event/cpu_locks.sh@47 -- # waitforlisten 57507 00:05:34.612 06:35:48 -- common/autotest_common.sh@829 -- # '[' -z 57507 ']' 00:05:34.612 06:35:48 
-- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.612 06:35:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.612 06:35:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.612 06:35:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.612 06:35:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.612 06:35:48 -- common/autotest_common.sh@10 -- # set +x 00:05:34.871 [2024-12-14 06:35:48.631652] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:34.871 [2024-12-14 06:35:48.631771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57507 ] 00:05:34.871 [2024-12-14 06:35:48.760432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.871 [2024-12-14 06:35:48.851881] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:34.871 [2024-12-14 06:35:48.852072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.806 06:35:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.806 06:35:49 -- common/autotest_common.sh@862 -- # return 0 00:05:35.806 06:35:49 -- event/cpu_locks.sh@49 -- # locks_exist 57507 00:05:35.806 06:35:49 -- event/cpu_locks.sh@22 -- # lslocks -p 57507 00:05:35.806 06:35:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.064 06:35:49 -- event/cpu_locks.sh@50 -- # killprocess 57507 00:05:36.064 06:35:49 -- common/autotest_common.sh@936 -- # '[' -z 57507 ']' 00:05:36.064 06:35:49 -- common/autotest_common.sh@940 -- # kill -0 57507 00:05:36.064 06:35:49 -- common/autotest_common.sh@941 -- # uname 00:05:36.064 06:35:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:36.064 06:35:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57507 00:05:36.064 06:35:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:36.064 06:35:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:36.064 killing process with pid 57507 00:05:36.064 06:35:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57507' 00:05:36.064 06:35:49 -- common/autotest_common.sh@955 -- # kill 57507 00:05:36.064 06:35:49 -- common/autotest_common.sh@960 -- # wait 57507 00:05:36.710 06:35:50 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57507 00:05:36.710 06:35:50 -- common/autotest_common.sh@650 -- # local es=0 00:05:36.710 06:35:50 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57507 00:05:36.710 06:35:50 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:36.710 06:35:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:36.710 06:35:50 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:36.710 06:35:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:36.710 06:35:50 -- common/autotest_common.sh@653 -- # waitforlisten 57507 00:05:36.710 06:35:50 -- common/autotest_common.sh@829 -- # '[' -z 57507 ']' 00:05:36.710 06:35:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.710 06:35:50 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.710 06:35:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.710 06:35:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.710 06:35:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.710 ERROR: process (pid: 57507) is no longer running 00:05:36.710 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57507) - No such process 00:05:36.710 06:35:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.710 06:35:50 -- common/autotest_common.sh@862 -- # return 1 00:05:36.710 06:35:50 -- common/autotest_common.sh@653 -- # es=1 00:05:36.710 06:35:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:36.710 06:35:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:36.710 06:35:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:36.710 06:35:50 -- event/cpu_locks.sh@54 -- # no_locks 00:05:36.710 06:35:50 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:36.710 06:35:50 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:36.710 06:35:50 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:36.710 00:05:36.710 real 0m1.971s 00:05:36.710 user 0m1.999s 00:05:36.710 sys 0m0.608s 00:05:36.710 06:35:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.710 06:35:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.710 ************************************ 00:05:36.710 END TEST default_locks 00:05:36.710 ************************************ 00:05:36.710 06:35:50 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:36.710 06:35:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.710 06:35:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.710 06:35:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.710 ************************************ 00:05:36.710 START TEST default_locks_via_rpc 00:05:36.710 ************************************ 00:05:36.710 06:35:50 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:36.710 06:35:50 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57571 00:05:36.710 06:35:50 -- event/cpu_locks.sh@63 -- # waitforlisten 57571 00:05:36.710 06:35:50 -- common/autotest_common.sh@829 -- # '[' -z 57571 ']' 00:05:36.710 06:35:50 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.710 06:35:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.710 06:35:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.710 06:35:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.710 06:35:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.710 06:35:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.710 [2024-12-14 06:35:50.656559] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
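[annotation] The default_locks test that just finished boils down to: start spdk_tgt pinned to core 0 (-m 0x1), prove it holds a CPU core lock by finding an spdk_cpu_lock entry in lslocks -p <pid>, kill it, and confirm that waiting on the dead pid fails. A compressed, illustrative rerun of those steps (binary path copied from the trace; the sleep is a crude stand-in for waitforlisten polling the RPC socket):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    $spdk_tgt -m 0x1 &
    pid=$!
    sleep 2                                   # stand-in for waitforlisten

    # the target should hold a lock file for core 0
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"

    kill "$pid"
    wait "$pid"
    kill -0 "$pid" 2>/dev/null || echo "process $pid is no longer running"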
00:05:36.710 [2024-12-14 06:35:50.656691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57571 ] 00:05:36.969 [2024-12-14 06:35:50.789094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.969 [2024-12-14 06:35:50.915290] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:36.969 [2024-12-14 06:35:50.915524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.902 06:35:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.902 06:35:51 -- common/autotest_common.sh@862 -- # return 0 00:05:37.902 06:35:51 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:37.902 06:35:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.902 06:35:51 -- common/autotest_common.sh@10 -- # set +x 00:05:37.902 06:35:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.902 06:35:51 -- event/cpu_locks.sh@67 -- # no_locks 00:05:37.902 06:35:51 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:37.902 06:35:51 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:37.902 06:35:51 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:37.902 06:35:51 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:37.902 06:35:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.902 06:35:51 -- common/autotest_common.sh@10 -- # set +x 00:05:37.902 06:35:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.902 06:35:51 -- event/cpu_locks.sh@71 -- # locks_exist 57571 00:05:37.902 06:35:51 -- event/cpu_locks.sh@22 -- # lslocks -p 57571 00:05:37.902 06:35:51 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.161 06:35:52 -- event/cpu_locks.sh@73 -- # killprocess 57571 00:05:38.161 06:35:52 -- common/autotest_common.sh@936 -- # '[' -z 57571 ']' 00:05:38.161 06:35:52 -- common/autotest_common.sh@940 -- # kill -0 57571 00:05:38.161 06:35:52 -- common/autotest_common.sh@941 -- # uname 00:05:38.161 06:35:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:38.161 06:35:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57571 00:05:38.161 06:35:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:38.161 06:35:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:38.161 killing process with pid 57571 00:05:38.161 06:35:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57571' 00:05:38.161 06:35:52 -- common/autotest_common.sh@955 -- # kill 57571 00:05:38.161 06:35:52 -- common/autotest_common.sh@960 -- # wait 57571 00:05:39.096 00:05:39.096 real 0m2.135s 00:05:39.096 user 0m2.214s 00:05:39.096 sys 0m0.650s 00:05:39.096 06:35:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.096 06:35:52 -- common/autotest_common.sh@10 -- # set +x 00:05:39.096 ************************************ 00:05:39.096 END TEST default_locks_via_rpc 00:05:39.096 ************************************ 00:05:39.096 06:35:52 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:39.096 06:35:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.096 06:35:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.096 06:35:52 -- common/autotest_common.sh@10 -- # set +x 00:05:39.096 
************************************ 00:05:39.096 START TEST non_locking_app_on_locked_coremask 00:05:39.096 ************************************ 00:05:39.096 06:35:52 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:39.096 06:35:52 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57640 00:05:39.096 06:35:52 -- event/cpu_locks.sh@81 -- # waitforlisten 57640 /var/tmp/spdk.sock 00:05:39.096 06:35:52 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.096 06:35:52 -- common/autotest_common.sh@829 -- # '[' -z 57640 ']' 00:05:39.096 06:35:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.096 06:35:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.096 06:35:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.096 06:35:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.096 06:35:52 -- common/autotest_common.sh@10 -- # set +x 00:05:39.097 [2024-12-14 06:35:52.850622] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:39.097 [2024-12-14 06:35:52.850749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57640 ] 00:05:39.097 [2024-12-14 06:35:52.990554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.355 [2024-12-14 06:35:53.145162] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:39.355 [2024-12-14 06:35:53.145390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.922 06:35:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.922 06:35:53 -- common/autotest_common.sh@862 -- # return 0 00:05:39.922 06:35:53 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57668 00:05:39.922 06:35:53 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:39.922 06:35:53 -- event/cpu_locks.sh@85 -- # waitforlisten 57668 /var/tmp/spdk2.sock 00:05:39.922 06:35:53 -- common/autotest_common.sh@829 -- # '[' -z 57668 ']' 00:05:39.922 06:35:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.922 06:35:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.922 06:35:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.922 06:35:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.922 06:35:53 -- common/autotest_common.sh@10 -- # set +x 00:05:39.922 [2024-12-14 06:35:53.868730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:39.922 [2024-12-14 06:35:53.868856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57668 ] 00:05:40.181 [2024-12-14 06:35:54.011681] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
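[annotation] Two ways of relaxing the per-core locks appear in these traces: default_locks_via_rpc flips them at runtime with the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs, while non_locking_app_on_locked_coremask starts a second target with --disable-cpumask-locks so it never tries to claim core 0 at all (hence the "CPU core locks deactivated" notice just above). A sketch of both, assuming a target already listening on the default /var/tmp/spdk.sock socket:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # runtime toggle on an already-running target
    $rpc framework_disable_cpumask_locks      # drop the per-core lock files
    $rpc framework_enable_cpumask_locks       # re-acquire them

    # launch-time opt-out: this instance shares core 0 without contending for its lock
    $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &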
00:05:40.181 [2024-12-14 06:35:54.011758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.439 [2024-12-14 06:35:54.298492] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:40.439 [2024-12-14 06:35:54.298707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.814 06:35:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.814 06:35:55 -- common/autotest_common.sh@862 -- # return 0 00:05:41.814 06:35:55 -- event/cpu_locks.sh@87 -- # locks_exist 57640 00:05:41.814 06:35:55 -- event/cpu_locks.sh@22 -- # lslocks -p 57640 00:05:41.814 06:35:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.750 06:35:56 -- event/cpu_locks.sh@89 -- # killprocess 57640 00:05:42.750 06:35:56 -- common/autotest_common.sh@936 -- # '[' -z 57640 ']' 00:05:42.750 06:35:56 -- common/autotest_common.sh@940 -- # kill -0 57640 00:05:42.750 06:35:56 -- common/autotest_common.sh@941 -- # uname 00:05:42.750 06:35:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:42.750 06:35:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57640 00:05:42.750 killing process with pid 57640 00:05:42.750 06:35:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:42.750 06:35:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:42.750 06:35:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57640' 00:05:42.750 06:35:56 -- common/autotest_common.sh@955 -- # kill 57640 00:05:42.750 06:35:56 -- common/autotest_common.sh@960 -- # wait 57640 00:05:43.686 06:35:57 -- event/cpu_locks.sh@90 -- # killprocess 57668 00:05:43.686 06:35:57 -- common/autotest_common.sh@936 -- # '[' -z 57668 ']' 00:05:43.686 06:35:57 -- common/autotest_common.sh@940 -- # kill -0 57668 00:05:43.686 06:35:57 -- common/autotest_common.sh@941 -- # uname 00:05:43.686 06:35:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:43.687 06:35:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57668 00:05:43.945 killing process with pid 57668 00:05:43.945 06:35:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:43.945 06:35:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:43.945 06:35:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57668' 00:05:43.945 06:35:57 -- common/autotest_common.sh@955 -- # kill 57668 00:05:43.945 06:35:57 -- common/autotest_common.sh@960 -- # wait 57668 00:05:44.513 ************************************ 00:05:44.513 END TEST non_locking_app_on_locked_coremask 00:05:44.513 ************************************ 00:05:44.513 00:05:44.513 real 0m5.483s 00:05:44.513 user 0m5.992s 00:05:44.513 sys 0m1.375s 00:05:44.513 06:35:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.513 06:35:58 -- common/autotest_common.sh@10 -- # set +x 00:05:44.513 06:35:58 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:44.513 06:35:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.513 06:35:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.513 06:35:58 -- common/autotest_common.sh@10 -- # set +x 00:05:44.513 ************************************ 00:05:44.513 START TEST locking_app_on_unlocked_coremask 00:05:44.513 ************************************ 00:05:44.513 06:35:58 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:44.513 06:35:58 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=57771 00:05:44.513 06:35:58 -- event/cpu_locks.sh@99 -- # waitforlisten 57771 /var/tmp/spdk.sock 00:05:44.513 06:35:58 -- common/autotest_common.sh@829 -- # '[' -z 57771 ']' 00:05:44.513 06:35:58 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:44.513 06:35:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.513 06:35:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.513 06:35:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.513 06:35:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.513 06:35:58 -- common/autotest_common.sh@10 -- # set +x 00:05:44.513 [2024-12-14 06:35:58.387038] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:44.513 [2024-12-14 06:35:58.387200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57771 ] 00:05:44.772 [2024-12-14 06:35:58.519511] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:44.772 [2024-12-14 06:35:58.519583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.772 [2024-12-14 06:35:58.654455] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:44.772 [2024-12-14 06:35:58.654669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.707 06:35:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.707 06:35:59 -- common/autotest_common.sh@862 -- # return 0 00:05:45.707 06:35:59 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:45.707 06:35:59 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57798 00:05:45.707 06:35:59 -- event/cpu_locks.sh@103 -- # waitforlisten 57798 /var/tmp/spdk2.sock 00:05:45.707 06:35:59 -- common/autotest_common.sh@829 -- # '[' -z 57798 ']' 00:05:45.707 06:35:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.707 06:35:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.707 06:35:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.707 06:35:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.707 06:35:59 -- common/autotest_common.sh@10 -- # set +x 00:05:45.707 [2024-12-14 06:35:59.394958] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
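[annotation] locking_app_on_unlocked_coremask inverts the previous case: here the first target (pid 57771) is the one launched with --disable-cpumask-locks, so a second, normally-locked target can claim core 0. Running two targets side by side only works because each gets its own RPC socket via -r, and every rpc.py call then names the socket it is talking to with -s. An illustrative pairing, with the paths taken from the trace:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $spdk_tgt -m 0x1 --disable-cpumask-locks &        # listens on /var/tmp/spdk.sock
    $spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &         # second instance, own socket

    # each instance is addressed through its own socket
    $rpc -s /var/tmp/spdk.sock  spdk_kill_instance SIGTERM
    $rpc -s /var/tmp/spdk2.sock spdk_kill_instance SIGTERM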
00:05:45.707 [2024-12-14 06:35:59.395081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57798 ] 00:05:45.707 [2024-12-14 06:35:59.530944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.967 [2024-12-14 06:35:59.739149] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:45.967 [2024-12-14 06:35:59.739362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.556 06:36:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.556 06:36:00 -- common/autotest_common.sh@862 -- # return 0 00:05:46.556 06:36:00 -- event/cpu_locks.sh@105 -- # locks_exist 57798 00:05:46.556 06:36:00 -- event/cpu_locks.sh@22 -- # lslocks -p 57798 00:05:46.556 06:36:00 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.488 06:36:01 -- event/cpu_locks.sh@107 -- # killprocess 57771 00:05:47.488 06:36:01 -- common/autotest_common.sh@936 -- # '[' -z 57771 ']' 00:05:47.488 06:36:01 -- common/autotest_common.sh@940 -- # kill -0 57771 00:05:47.488 06:36:01 -- common/autotest_common.sh@941 -- # uname 00:05:47.488 06:36:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:47.488 06:36:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57771 00:05:47.488 06:36:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:47.488 killing process with pid 57771 00:05:47.488 06:36:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:47.488 06:36:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57771' 00:05:47.488 06:36:01 -- common/autotest_common.sh@955 -- # kill 57771 00:05:47.488 06:36:01 -- common/autotest_common.sh@960 -- # wait 57771 00:05:48.862 06:36:02 -- event/cpu_locks.sh@108 -- # killprocess 57798 00:05:48.862 06:36:02 -- common/autotest_common.sh@936 -- # '[' -z 57798 ']' 00:05:48.862 06:36:02 -- common/autotest_common.sh@940 -- # kill -0 57798 00:05:48.862 06:36:02 -- common/autotest_common.sh@941 -- # uname 00:05:48.862 06:36:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.862 06:36:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57798 00:05:48.862 06:36:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:48.862 06:36:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:48.862 killing process with pid 57798 00:05:48.862 06:36:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57798' 00:05:48.862 06:36:02 -- common/autotest_common.sh@955 -- # kill 57798 00:05:48.862 06:36:02 -- common/autotest_common.sh@960 -- # wait 57798 00:05:49.428 00:05:49.428 real 0m4.860s 00:05:49.428 user 0m5.138s 00:05:49.428 sys 0m1.271s 00:05:49.428 06:36:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.428 ************************************ 00:05:49.428 END TEST locking_app_on_unlocked_coremask 00:05:49.428 ************************************ 00:05:49.428 06:36:03 -- common/autotest_common.sh@10 -- # set +x 00:05:49.428 06:36:03 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:49.428 06:36:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:49.428 06:36:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.428 06:36:03 -- common/autotest_common.sh@10 -- # set +x 
00:05:49.428 ************************************ 00:05:49.428 START TEST locking_app_on_locked_coremask 00:05:49.428 ************************************ 00:05:49.428 06:36:03 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:05:49.428 06:36:03 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57884 00:05:49.428 06:36:03 -- event/cpu_locks.sh@116 -- # waitforlisten 57884 /var/tmp/spdk.sock 00:05:49.428 06:36:03 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.428 06:36:03 -- common/autotest_common.sh@829 -- # '[' -z 57884 ']' 00:05:49.428 06:36:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.428 06:36:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.428 06:36:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.428 06:36:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.428 06:36:03 -- common/autotest_common.sh@10 -- # set +x 00:05:49.428 [2024-12-14 06:36:03.307332] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:49.428 [2024-12-14 06:36:03.307510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57884 ] 00:05:49.686 [2024-12-14 06:36:03.447084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.686 [2024-12-14 06:36:03.596849] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:49.686 [2024-12-14 06:36:03.597082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.622 06:36:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.622 06:36:04 -- common/autotest_common.sh@862 -- # return 0 00:05:50.622 06:36:04 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=57912 00:05:50.622 06:36:04 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:50.622 06:36:04 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 57912 /var/tmp/spdk2.sock 00:05:50.622 06:36:04 -- common/autotest_common.sh@650 -- # local es=0 00:05:50.622 06:36:04 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57912 /var/tmp/spdk2.sock 00:05:50.622 06:36:04 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:50.622 06:36:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.622 06:36:04 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:50.622 06:36:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.622 06:36:04 -- common/autotest_common.sh@653 -- # waitforlisten 57912 /var/tmp/spdk2.sock 00:05:50.622 06:36:04 -- common/autotest_common.sh@829 -- # '[' -z 57912 ']' 00:05:50.622 06:36:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.622 06:36:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.622 06:36:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
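[annotation] locking_app_on_locked_coremask is the negative case being set up here: with pid 57884 already holding the core 0 lock, a second plain spdk_tgt on the same mask is expected to abort with "Cannot create lock on core 0, probably process 57884 has claimed it" (visible just below), and the test only passes if that second launch fails. Roughly:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    $spdk_tgt -m 0x1 &                                # first instance claims core 0
    first=$!
    sleep 2                                           # crude stand-in for waitforlisten

    if $spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then  # should refuse to start
        echo "unexpected: second instance acquired core 0"
    else
        echo "second instance was refused, as expected"
    fi

    kill "$first"; wait "$first"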
00:05:50.622 06:36:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.622 06:36:04 -- common/autotest_common.sh@10 -- # set +x 00:05:50.622 [2024-12-14 06:36:04.382650] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:50.622 [2024-12-14 06:36:04.382801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57912 ] 00:05:50.622 [2024-12-14 06:36:04.529430] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57884 has claimed it. 00:05:50.622 [2024-12-14 06:36:04.529522] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:51.189 ERROR: process (pid: 57912) is no longer running 00:05:51.189 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57912) - No such process 00:05:51.189 06:36:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.189 06:36:05 -- common/autotest_common.sh@862 -- # return 1 00:05:51.189 06:36:05 -- common/autotest_common.sh@653 -- # es=1 00:05:51.189 06:36:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:51.189 06:36:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:51.189 06:36:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:51.189 06:36:05 -- event/cpu_locks.sh@122 -- # locks_exist 57884 00:05:51.189 06:36:05 -- event/cpu_locks.sh@22 -- # lslocks -p 57884 00:05:51.189 06:36:05 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.447 06:36:05 -- event/cpu_locks.sh@124 -- # killprocess 57884 00:05:51.447 06:36:05 -- common/autotest_common.sh@936 -- # '[' -z 57884 ']' 00:05:51.447 06:36:05 -- common/autotest_common.sh@940 -- # kill -0 57884 00:05:51.447 06:36:05 -- common/autotest_common.sh@941 -- # uname 00:05:51.447 06:36:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:51.447 06:36:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57884 00:05:51.447 06:36:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:51.447 killing process with pid 57884 00:05:51.447 06:36:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:51.447 06:36:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57884' 00:05:51.447 06:36:05 -- common/autotest_common.sh@955 -- # kill 57884 00:05:51.447 06:36:05 -- common/autotest_common.sh@960 -- # wait 57884 00:05:52.383 00:05:52.383 real 0m2.810s 00:05:52.383 user 0m3.105s 00:05:52.383 sys 0m0.743s 00:05:52.383 06:36:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.383 06:36:06 -- common/autotest_common.sh@10 -- # set +x 00:05:52.383 ************************************ 00:05:52.383 END TEST locking_app_on_locked_coremask 00:05:52.383 ************************************ 00:05:52.383 06:36:06 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:52.383 06:36:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:52.383 06:36:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.383 06:36:06 -- common/autotest_common.sh@10 -- # set +x 00:05:52.383 ************************************ 00:05:52.383 START TEST locking_overlapped_coremask 00:05:52.383 ************************************ 00:05:52.383 06:36:06 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:05:52.383 06:36:06 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=57969 00:05:52.383 06:36:06 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:52.383 06:36:06 -- event/cpu_locks.sh@133 -- # waitforlisten 57969 /var/tmp/spdk.sock 00:05:52.383 06:36:06 -- common/autotest_common.sh@829 -- # '[' -z 57969 ']' 00:05:52.383 06:36:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.383 06:36:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.383 06:36:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.383 06:36:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.383 06:36:06 -- common/autotest_common.sh@10 -- # set +x 00:05:52.383 [2024-12-14 06:36:06.164786] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:52.383 [2024-12-14 06:36:06.164923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57969 ] 00:05:52.383 [2024-12-14 06:36:06.299327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.642 [2024-12-14 06:36:06.449843] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:52.642 [2024-12-14 06:36:06.450454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.642 [2024-12-14 06:36:06.450629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.642 [2024-12-14 06:36:06.450634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.578 06:36:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.578 06:36:07 -- common/autotest_common.sh@862 -- # return 0 00:05:53.578 06:36:07 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=57999 00:05:53.578 06:36:07 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 57999 /var/tmp/spdk2.sock 00:05:53.578 06:36:07 -- common/autotest_common.sh@650 -- # local es=0 00:05:53.578 06:36:07 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:53.578 06:36:07 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57999 /var/tmp/spdk2.sock 00:05:53.578 06:36:07 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:53.578 06:36:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.578 06:36:07 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:53.578 06:36:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.578 06:36:07 -- common/autotest_common.sh@653 -- # waitforlisten 57999 /var/tmp/spdk2.sock 00:05:53.578 06:36:07 -- common/autotest_common.sh@829 -- # '[' -z 57999 ']' 00:05:53.578 06:36:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.578 06:36:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.578 06:36:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:53.578 06:36:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.578 06:36:07 -- common/autotest_common.sh@10 -- # set +x 00:05:53.578 [2024-12-14 06:36:07.309300] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:53.578 [2024-12-14 06:36:07.309439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57999 ] 00:05:53.578 [2024-12-14 06:36:07.458137] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57969 has claimed it. 00:05:53.578 [2024-12-14 06:36:07.458228] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:54.146 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57999) - No such process 00:05:54.146 ERROR: process (pid: 57999) is no longer running 00:05:54.146 06:36:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.146 06:36:08 -- common/autotest_common.sh@862 -- # return 1 00:05:54.146 06:36:08 -- common/autotest_common.sh@653 -- # es=1 00:05:54.146 06:36:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:54.146 06:36:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:54.146 06:36:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:54.146 06:36:08 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:54.146 06:36:08 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:54.146 06:36:08 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:54.146 06:36:08 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:54.146 06:36:08 -- event/cpu_locks.sh@141 -- # killprocess 57969 00:05:54.146 06:36:08 -- common/autotest_common.sh@936 -- # '[' -z 57969 ']' 00:05:54.146 06:36:08 -- common/autotest_common.sh@940 -- # kill -0 57969 00:05:54.146 06:36:08 -- common/autotest_common.sh@941 -- # uname 00:05:54.146 06:36:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.146 06:36:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57969 00:05:54.146 06:36:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.146 06:36:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.146 killing process with pid 57969 00:05:54.146 06:36:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57969' 00:05:54.146 06:36:08 -- common/autotest_common.sh@955 -- # kill 57969 00:05:54.146 06:36:08 -- common/autotest_common.sh@960 -- # wait 57969 00:05:54.782 00:05:54.782 real 0m2.581s 00:05:54.782 user 0m6.984s 00:05:54.782 sys 0m0.584s 00:05:54.782 06:36:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.782 ************************************ 00:05:54.782 END TEST locking_overlapped_coremask 00:05:54.782 06:36:08 -- common/autotest_common.sh@10 -- # set +x 00:05:54.782 ************************************ 00:05:54.782 06:36:08 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:54.782 06:36:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.782 06:36:08 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.782 06:36:08 -- common/autotest_common.sh@10 -- # set +x 00:05:54.782 ************************************ 00:05:54.782 START TEST locking_overlapped_coremask_via_rpc 00:05:54.782 ************************************ 00:05:54.782 06:36:08 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:05:54.782 06:36:08 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58049 00:05:54.782 06:36:08 -- event/cpu_locks.sh@149 -- # waitforlisten 58049 /var/tmp/spdk.sock 00:05:54.782 06:36:08 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:54.782 06:36:08 -- common/autotest_common.sh@829 -- # '[' -z 58049 ']' 00:05:54.782 06:36:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.782 06:36:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.782 06:36:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.782 06:36:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.782 06:36:08 -- common/autotest_common.sh@10 -- # set +x 00:05:55.041 [2024-12-14 06:36:08.803959] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.041 [2024-12-14 06:36:08.804088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58049 ] 00:05:55.041 [2024-12-14 06:36:08.939570] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:55.041 [2024-12-14 06:36:08.939658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.299 [2024-12-14 06:36:09.091882] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.299 [2024-12-14 06:36:09.092283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.299 [2024-12-14 06:36:09.092386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.299 [2024-12-14 06:36:09.092392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.866 06:36:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.866 06:36:09 -- common/autotest_common.sh@862 -- # return 0 00:05:55.866 06:36:09 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58081 00:05:55.866 06:36:09 -- event/cpu_locks.sh@153 -- # waitforlisten 58081 /var/tmp/spdk2.sock 00:05:55.866 06:36:09 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:55.866 06:36:09 -- common/autotest_common.sh@829 -- # '[' -z 58081 ']' 00:05:55.866 06:36:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.866 06:36:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.866 06:36:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
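Note on the launch flags traced above: both targets in the via_rpc case start with --disable-cpumask-locks, so the overlapping masks (0x7 for pid 58049, 0x1c for the second target on /var/tmp/spdk2.sock) do not abort startup; the locks are only claimed later over JSON-RPC. A minimal sketch of the same two launches, with the backgrounding assumed rather than copied from the script:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &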
00:05:55.866 06:36:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.866 06:36:09 -- common/autotest_common.sh@10 -- # set +x 00:05:55.866 [2024-12-14 06:36:09.854626] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.866 [2024-12-14 06:36:09.854751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58081 ] 00:05:56.124 [2024-12-14 06:36:10.002975] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:56.124 [2024-12-14 06:36:10.003046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.383 [2024-12-14 06:36:10.299129] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.383 [2024-12-14 06:36:10.299468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.383 [2024-12-14 06:36:10.301068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.383 [2024-12-14 06:36:10.301069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:57.758 06:36:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.758 06:36:11 -- common/autotest_common.sh@862 -- # return 0 00:05:57.758 06:36:11 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:57.758 06:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.758 06:36:11 -- common/autotest_common.sh@10 -- # set +x 00:05:57.758 06:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.758 06:36:11 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:57.758 06:36:11 -- common/autotest_common.sh@650 -- # local es=0 00:05:57.758 06:36:11 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:57.758 06:36:11 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:57.758 06:36:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.758 06:36:11 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:57.758 06:36:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.758 06:36:11 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:57.758 06:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.758 06:36:11 -- common/autotest_common.sh@10 -- # set +x 00:05:57.758 [2024-12-14 06:36:11.597114] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58049 has claimed it. 
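The claim failure just above is the mask overlap doing its job: 0x7 covers cores 0-2, 0x1c covers cores 2-4, and once pid 58049 has taken locks on its mask (the earlier rpc_cmd framework_enable_cpumask_locks that returned 0), the second target cannot lock core 2. A quick sketch of the overlap in plain shell arithmetic:

  $ printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))
  overlap mask: 0x4   # bit 2 set, i.e. core 2 is the contended core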
00:05:57.758 2024/12/14 06:36:11 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:05:57.758 request: 00:05:57.758 { 00:05:57.758 "method": "framework_enable_cpumask_locks", 00:05:57.758 "params": {} 00:05:57.759 } 00:05:57.759 Got JSON-RPC error response 00:05:57.759 GoRPCClient: error on JSON-RPC call 00:05:57.759 06:36:11 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:57.759 06:36:11 -- common/autotest_common.sh@653 -- # es=1 00:05:57.759 06:36:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.759 06:36:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.759 06:36:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.759 06:36:11 -- event/cpu_locks.sh@158 -- # waitforlisten 58049 /var/tmp/spdk.sock 00:05:57.759 06:36:11 -- common/autotest_common.sh@829 -- # '[' -z 58049 ']' 00:05:57.759 06:36:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.759 06:36:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.759 06:36:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.759 06:36:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.759 06:36:11 -- common/autotest_common.sh@10 -- # set +x 00:05:58.018 06:36:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.018 06:36:11 -- common/autotest_common.sh@862 -- # return 0 00:05:58.018 06:36:11 -- event/cpu_locks.sh@159 -- # waitforlisten 58081 /var/tmp/spdk2.sock 00:05:58.018 06:36:11 -- common/autotest_common.sh@829 -- # '[' -z 58081 ']' 00:05:58.018 06:36:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.018 06:36:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.018 06:36:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
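The JSON-RPC exchange above is the heart of this test: framework_enable_cpumask_locks is invoked on the second target's socket and is expected to fail with Code=-32603 because core 2 is already locked by pid 58049. The same call can be reproduced with the rpc_cmd wrapper used throughout this log (a sketch; the error text is the one already shown above):

  $ rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2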
00:05:58.018 06:36:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.018 06:36:11 -- common/autotest_common.sh@10 -- # set +x 00:05:58.276 06:36:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.276 06:36:12 -- common/autotest_common.sh@862 -- # return 0 00:05:58.276 06:36:12 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:58.276 06:36:12 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:58.276 ************************************ 00:05:58.276 END TEST locking_overlapped_coremask_via_rpc 00:05:58.276 ************************************ 00:05:58.277 06:36:12 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:58.277 06:36:12 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:58.277 00:05:58.277 real 0m3.365s 00:05:58.277 user 0m1.538s 00:05:58.277 sys 0m0.268s 00:05:58.277 06:36:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.277 06:36:12 -- common/autotest_common.sh@10 -- # set +x 00:05:58.277 06:36:12 -- event/cpu_locks.sh@174 -- # cleanup 00:05:58.277 06:36:12 -- event/cpu_locks.sh@15 -- # [[ -z 58049 ]] 00:05:58.277 06:36:12 -- event/cpu_locks.sh@15 -- # killprocess 58049 00:05:58.277 06:36:12 -- common/autotest_common.sh@936 -- # '[' -z 58049 ']' 00:05:58.277 06:36:12 -- common/autotest_common.sh@940 -- # kill -0 58049 00:05:58.277 06:36:12 -- common/autotest_common.sh@941 -- # uname 00:05:58.277 06:36:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:58.277 06:36:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58049 00:05:58.277 killing process with pid 58049 00:05:58.277 06:36:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:58.277 06:36:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:58.277 06:36:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58049' 00:05:58.277 06:36:12 -- common/autotest_common.sh@955 -- # kill 58049 00:05:58.277 06:36:12 -- common/autotest_common.sh@960 -- # wait 58049 00:05:58.844 06:36:12 -- event/cpu_locks.sh@16 -- # [[ -z 58081 ]] 00:05:58.844 06:36:12 -- event/cpu_locks.sh@16 -- # killprocess 58081 00:05:58.844 06:36:12 -- common/autotest_common.sh@936 -- # '[' -z 58081 ']' 00:05:58.844 06:36:12 -- common/autotest_common.sh@940 -- # kill -0 58081 00:05:58.844 06:36:12 -- common/autotest_common.sh@941 -- # uname 00:05:58.844 06:36:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:58.844 06:36:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58081 00:05:58.844 killing process with pid 58081 00:05:58.844 06:36:12 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:58.844 06:36:12 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:58.844 06:36:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58081' 00:05:58.844 06:36:12 -- common/autotest_common.sh@955 -- # kill 58081 00:05:58.844 06:36:12 -- common/autotest_common.sh@960 -- # wait 58081 00:05:59.780 06:36:13 -- event/cpu_locks.sh@18 -- # rm -f 00:05:59.780 06:36:13 -- event/cpu_locks.sh@1 -- # cleanup 00:05:59.780 06:36:13 -- event/cpu_locks.sh@15 -- # [[ -z 58049 ]] 00:05:59.780 06:36:13 -- event/cpu_locks.sh@15 -- # killprocess 58049 00:05:59.781 06:36:13 -- 
common/autotest_common.sh@936 -- # '[' -z 58049 ']' 00:05:59.781 06:36:13 -- common/autotest_common.sh@940 -- # kill -0 58049 00:05:59.781 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58049) - No such process 00:05:59.781 Process with pid 58049 is not found 00:05:59.781 06:36:13 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58049 is not found' 00:05:59.781 06:36:13 -- event/cpu_locks.sh@16 -- # [[ -z 58081 ]] 00:05:59.781 06:36:13 -- event/cpu_locks.sh@16 -- # killprocess 58081 00:05:59.781 06:36:13 -- common/autotest_common.sh@936 -- # '[' -z 58081 ']' 00:05:59.781 06:36:13 -- common/autotest_common.sh@940 -- # kill -0 58081 00:05:59.781 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58081) - No such process 00:05:59.781 Process with pid 58081 is not found 00:05:59.781 06:36:13 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58081 is not found' 00:05:59.781 06:36:13 -- event/cpu_locks.sh@18 -- # rm -f 00:05:59.781 00:05:59.781 real 0m25.050s 00:05:59.781 user 0m44.150s 00:05:59.781 sys 0m6.597s 00:05:59.781 06:36:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.781 06:36:13 -- common/autotest_common.sh@10 -- # set +x 00:05:59.781 ************************************ 00:05:59.781 END TEST cpu_locks 00:05:59.781 ************************************ 00:05:59.781 00:05:59.781 real 0m53.964s 00:05:59.781 user 1m42.725s 00:05:59.781 sys 0m10.851s 00:05:59.781 06:36:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.781 06:36:13 -- common/autotest_common.sh@10 -- # set +x 00:05:59.781 ************************************ 00:05:59.781 END TEST event 00:05:59.781 ************************************ 00:05:59.781 06:36:13 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:59.781 06:36:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.781 06:36:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.781 06:36:13 -- common/autotest_common.sh@10 -- # set +x 00:05:59.781 ************************************ 00:05:59.781 START TEST thread 00:05:59.781 ************************************ 00:05:59.781 06:36:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:59.781 * Looking for test storage... 
00:05:59.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:59.781 06:36:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:59.781 06:36:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:59.781 06:36:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:59.781 06:36:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:59.781 06:36:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:59.781 06:36:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:59.781 06:36:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:59.781 06:36:13 -- scripts/common.sh@335 -- # IFS=.-: 00:05:59.781 06:36:13 -- scripts/common.sh@335 -- # read -ra ver1 00:05:59.781 06:36:13 -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.781 06:36:13 -- scripts/common.sh@336 -- # read -ra ver2 00:05:59.781 06:36:13 -- scripts/common.sh@337 -- # local 'op=<' 00:05:59.781 06:36:13 -- scripts/common.sh@339 -- # ver1_l=2 00:05:59.781 06:36:13 -- scripts/common.sh@340 -- # ver2_l=1 00:05:59.781 06:36:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:59.781 06:36:13 -- scripts/common.sh@343 -- # case "$op" in 00:05:59.781 06:36:13 -- scripts/common.sh@344 -- # : 1 00:05:59.781 06:36:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:59.781 06:36:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.781 06:36:13 -- scripts/common.sh@364 -- # decimal 1 00:05:59.781 06:36:13 -- scripts/common.sh@352 -- # local d=1 00:05:59.781 06:36:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.781 06:36:13 -- scripts/common.sh@354 -- # echo 1 00:05:59.781 06:36:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:59.781 06:36:13 -- scripts/common.sh@365 -- # decimal 2 00:05:59.781 06:36:13 -- scripts/common.sh@352 -- # local d=2 00:05:59.781 06:36:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.781 06:36:13 -- scripts/common.sh@354 -- # echo 2 00:05:59.781 06:36:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:59.781 06:36:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:59.781 06:36:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:59.781 06:36:13 -- scripts/common.sh@367 -- # return 0 00:05:59.781 06:36:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.781 06:36:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:59.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.781 --rc genhtml_branch_coverage=1 00:05:59.781 --rc genhtml_function_coverage=1 00:05:59.781 --rc genhtml_legend=1 00:05:59.781 --rc geninfo_all_blocks=1 00:05:59.781 --rc geninfo_unexecuted_blocks=1 00:05:59.781 00:05:59.781 ' 00:05:59.781 06:36:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:59.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.781 --rc genhtml_branch_coverage=1 00:05:59.781 --rc genhtml_function_coverage=1 00:05:59.781 --rc genhtml_legend=1 00:05:59.781 --rc geninfo_all_blocks=1 00:05:59.781 --rc geninfo_unexecuted_blocks=1 00:05:59.781 00:05:59.781 ' 00:05:59.781 06:36:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:59.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.781 --rc genhtml_branch_coverage=1 00:05:59.781 --rc genhtml_function_coverage=1 00:05:59.781 --rc genhtml_legend=1 00:05:59.781 --rc geninfo_all_blocks=1 00:05:59.781 --rc geninfo_unexecuted_blocks=1 00:05:59.781 00:05:59.781 ' 00:05:59.781 06:36:13 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:59.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.781 --rc genhtml_branch_coverage=1 00:05:59.781 --rc genhtml_function_coverage=1 00:05:59.781 --rc genhtml_legend=1 00:05:59.781 --rc geninfo_all_blocks=1 00:05:59.781 --rc geninfo_unexecuted_blocks=1 00:05:59.781 00:05:59.781 ' 00:05:59.781 06:36:13 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:59.781 06:36:13 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:59.781 06:36:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.781 06:36:13 -- common/autotest_common.sh@10 -- # set +x 00:05:59.781 ************************************ 00:05:59.781 START TEST thread_poller_perf 00:05:59.781 ************************************ 00:05:59.781 06:36:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:59.781 [2024-12-14 06:36:13.698866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:59.781 [2024-12-14 06:36:13.699654] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58253 ] 00:06:00.040 [2024-12-14 06:36:13.836902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.041 [2024-12-14 06:36:13.968881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.041 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:01.417 [2024-12-14T06:36:15.409Z] ====================================== 00:06:01.417 [2024-12-14T06:36:15.409Z] busy:2214265090 (cyc) 00:06:01.417 [2024-12-14T06:36:15.409Z] total_run_count: 304000 00:06:01.417 [2024-12-14T06:36:15.409Z] tsc_hz: 2200000000 (cyc) 00:06:01.417 [2024-12-14T06:36:15.409Z] ====================================== 00:06:01.417 [2024-12-14T06:36:15.409Z] poller_cost: 7283 (cyc), 3310 (nsec) 00:06:01.417 00:06:01.417 real 0m1.492s 00:06:01.417 user 0m1.299s 00:06:01.417 sys 0m0.082s 00:06:01.417 06:36:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.417 ************************************ 00:06:01.417 END TEST thread_poller_perf 00:06:01.417 06:36:15 -- common/autotest_common.sh@10 -- # set +x 00:06:01.417 ************************************ 00:06:01.417 06:36:15 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:01.417 06:36:15 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:01.417 06:36:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.417 06:36:15 -- common/autotest_common.sh@10 -- # set +x 00:06:01.417 ************************************ 00:06:01.417 START TEST thread_poller_perf 00:06:01.417 ************************************ 00:06:01.417 06:36:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:01.417 [2024-12-14 06:36:15.249044] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
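The poller_cost figure reported above appears to be busy cycles divided by total_run_count, converted to nanoseconds using the reported tsc_hz; redoing the arithmetic for this first poller_perf run with plain shell integer math gives the same numbers:

  $ echo $(( 2214265090 / 304000 ))            # busy cycles per poll
  7283
  $ echo $(( 7283 * 1000000000 / 2200000000 )) # cycles to nsec at 2.2 GHz
  3310

matching the 'poller_cost: 7283 (cyc), 3310 (nsec)' line printed by the tool.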
00:06:01.417 [2024-12-14 06:36:15.249171] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58294 ] 00:06:01.417 [2024-12-14 06:36:15.388496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.676 [2024-12-14 06:36:15.560024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.676 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:03.055 [2024-12-14T06:36:17.047Z] ====================================== 00:06:03.055 [2024-12-14T06:36:17.047Z] busy:2203552174 (cyc) 00:06:03.055 [2024-12-14T06:36:17.047Z] total_run_count: 4109000 00:06:03.055 [2024-12-14T06:36:17.047Z] tsc_hz: 2200000000 (cyc) 00:06:03.055 [2024-12-14T06:36:17.047Z] ====================================== 00:06:03.055 [2024-12-14T06:36:17.047Z] poller_cost: 536 (cyc), 243 (nsec) 00:06:03.055 00:06:03.055 real 0m1.522s 00:06:03.055 user 0m1.329s 00:06:03.055 sys 0m0.082s 00:06:03.055 06:36:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.055 ************************************ 00:06:03.055 END TEST thread_poller_perf 00:06:03.055 ************************************ 00:06:03.055 06:36:16 -- common/autotest_common.sh@10 -- # set +x 00:06:03.055 06:36:16 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:03.055 00:06:03.055 real 0m3.289s 00:06:03.055 user 0m2.764s 00:06:03.055 sys 0m0.305s 00:06:03.055 06:36:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.055 ************************************ 00:06:03.055 END TEST thread 00:06:03.055 ************************************ 00:06:03.055 06:36:16 -- common/autotest_common.sh@10 -- # set +x 00:06:03.055 06:36:16 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:03.055 06:36:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.055 06:36:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.055 06:36:16 -- common/autotest_common.sh@10 -- # set +x 00:06:03.055 ************************************ 00:06:03.055 START TEST accel 00:06:03.055 ************************************ 00:06:03.055 06:36:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:03.055 * Looking for test storage... 
00:06:03.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:03.055 06:36:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:03.055 06:36:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:03.055 06:36:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:03.315 06:36:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:03.315 06:36:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:03.315 06:36:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:03.315 06:36:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:03.315 06:36:17 -- scripts/common.sh@335 -- # IFS=.-: 00:06:03.315 06:36:17 -- scripts/common.sh@335 -- # read -ra ver1 00:06:03.315 06:36:17 -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.315 06:36:17 -- scripts/common.sh@336 -- # read -ra ver2 00:06:03.315 06:36:17 -- scripts/common.sh@337 -- # local 'op=<' 00:06:03.315 06:36:17 -- scripts/common.sh@339 -- # ver1_l=2 00:06:03.315 06:36:17 -- scripts/common.sh@340 -- # ver2_l=1 00:06:03.315 06:36:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:03.315 06:36:17 -- scripts/common.sh@343 -- # case "$op" in 00:06:03.315 06:36:17 -- scripts/common.sh@344 -- # : 1 00:06:03.315 06:36:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:03.315 06:36:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.315 06:36:17 -- scripts/common.sh@364 -- # decimal 1 00:06:03.315 06:36:17 -- scripts/common.sh@352 -- # local d=1 00:06:03.315 06:36:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.315 06:36:17 -- scripts/common.sh@354 -- # echo 1 00:06:03.315 06:36:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:03.315 06:36:17 -- scripts/common.sh@365 -- # decimal 2 00:06:03.315 06:36:17 -- scripts/common.sh@352 -- # local d=2 00:06:03.315 06:36:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.315 06:36:17 -- scripts/common.sh@354 -- # echo 2 00:06:03.315 06:36:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:03.315 06:36:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:03.315 06:36:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:03.315 06:36:17 -- scripts/common.sh@367 -- # return 0 00:06:03.315 06:36:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.315 06:36:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:03.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.315 --rc genhtml_branch_coverage=1 00:06:03.315 --rc genhtml_function_coverage=1 00:06:03.315 --rc genhtml_legend=1 00:06:03.315 --rc geninfo_all_blocks=1 00:06:03.315 --rc geninfo_unexecuted_blocks=1 00:06:03.315 00:06:03.315 ' 00:06:03.315 06:36:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:03.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.315 --rc genhtml_branch_coverage=1 00:06:03.315 --rc genhtml_function_coverage=1 00:06:03.315 --rc genhtml_legend=1 00:06:03.315 --rc geninfo_all_blocks=1 00:06:03.315 --rc geninfo_unexecuted_blocks=1 00:06:03.315 00:06:03.315 ' 00:06:03.315 06:36:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:03.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.315 --rc genhtml_branch_coverage=1 00:06:03.315 --rc genhtml_function_coverage=1 00:06:03.315 --rc genhtml_legend=1 00:06:03.315 --rc geninfo_all_blocks=1 00:06:03.315 --rc geninfo_unexecuted_blocks=1 00:06:03.315 00:06:03.315 ' 00:06:03.315 06:36:17 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:03.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.315 --rc genhtml_branch_coverage=1 00:06:03.315 --rc genhtml_function_coverage=1 00:06:03.315 --rc genhtml_legend=1 00:06:03.315 --rc geninfo_all_blocks=1 00:06:03.315 --rc geninfo_unexecuted_blocks=1 00:06:03.315 00:06:03.315 ' 00:06:03.315 06:36:17 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:03.315 06:36:17 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:03.316 06:36:17 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:03.316 06:36:17 -- accel/accel.sh@59 -- # spdk_tgt_pid=58376 00:06:03.316 06:36:17 -- accel/accel.sh@60 -- # waitforlisten 58376 00:06:03.316 06:36:17 -- common/autotest_common.sh@829 -- # '[' -z 58376 ']' 00:06:03.316 06:36:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.316 06:36:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.316 06:36:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.316 06:36:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.316 06:36:17 -- common/autotest_common.sh@10 -- # set +x 00:06:03.316 06:36:17 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:03.316 06:36:17 -- accel/accel.sh@58 -- # build_accel_config 00:06:03.316 06:36:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.316 06:36:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.316 06:36:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.316 06:36:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.316 06:36:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.316 06:36:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.316 06:36:17 -- accel/accel.sh@42 -- # jq -r . 00:06:03.316 [2024-12-14 06:36:17.148063] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:03.316 [2024-12-14 06:36:17.148192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58376 ] 00:06:03.316 [2024-12-14 06:36:17.288414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.575 [2024-12-14 06:36:17.467465] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:03.575 [2024-12-14 06:36:17.467700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.537 06:36:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.537 06:36:18 -- common/autotest_common.sh@862 -- # return 0 00:06:04.537 06:36:18 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:04.537 06:36:18 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:04.537 06:36:18 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:04.537 06:36:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.537 06:36:18 -- common/autotest_common.sh@10 -- # set +x 00:06:04.537 06:36:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.537 06:36:18 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # IFS== 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.537 06:36:18 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.537 06:36:18 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # IFS== 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.537 06:36:18 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.537 06:36:18 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # IFS== 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.537 06:36:18 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.537 06:36:18 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # IFS== 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.537 06:36:18 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.537 06:36:18 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # IFS== 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.537 06:36:18 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.537 06:36:18 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # IFS== 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.537 06:36:18 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.537 06:36:18 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # IFS== 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.537 06:36:18 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.537 06:36:18 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # IFS== 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.537 06:36:18 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.537 06:36:18 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # IFS== 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.537 06:36:18 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.537 06:36:18 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # IFS== 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.537 06:36:18 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.537 06:36:18 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # IFS== 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.537 06:36:18 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.537 06:36:18 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # IFS== 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.537 
06:36:18 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.537 06:36:18 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # IFS== 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.537 06:36:18 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.537 06:36:18 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # IFS== 00:06:04.537 06:36:18 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.537 06:36:18 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.537 06:36:18 -- accel/accel.sh@67 -- # killprocess 58376 00:06:04.537 06:36:18 -- common/autotest_common.sh@936 -- # '[' -z 58376 ']' 00:06:04.537 06:36:18 -- common/autotest_common.sh@940 -- # kill -0 58376 00:06:04.537 06:36:18 -- common/autotest_common.sh@941 -- # uname 00:06:04.537 06:36:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.537 06:36:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58376 00:06:04.537 06:36:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:04.537 killing process with pid 58376 00:06:04.537 06:36:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:04.537 06:36:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58376' 00:06:04.537 06:36:18 -- common/autotest_common.sh@955 -- # kill 58376 00:06:04.537 06:36:18 -- common/autotest_common.sh@960 -- # wait 58376 00:06:05.104 06:36:19 -- accel/accel.sh@68 -- # trap - ERR 00:06:05.104 06:36:19 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:05.104 06:36:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:05.104 06:36:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.104 06:36:19 -- common/autotest_common.sh@10 -- # set +x 00:06:05.104 06:36:19 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:05.104 06:36:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:05.104 06:36:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.104 06:36:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.104 06:36:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.104 06:36:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.104 06:36:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.104 06:36:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.104 06:36:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.104 06:36:19 -- accel/accel.sh@42 -- # jq -r . 
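The long run of IFS== / read -r opc module lines a little earlier is get_expected_opcs consuming accel_get_opc_assignments: each key=value pair from the RPC is split on '=' and, with no hardware accel modules configured in this run, every opcode maps to software. The pattern looks roughly like the following sketch (the <<< here-string is an assumption, not copied from accel.sh):

  declare -A expected_opcs
  exp_opcs=($(rpc_cmd accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
  for opc_opt in "${exp_opcs[@]}"; do
      IFS== read -r opc module <<< "$opc_opt"
      expected_opcs["$opc"]=$module   # 'software' for every opcode in this log
  done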
00:06:05.104 06:36:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.104 06:36:19 -- common/autotest_common.sh@10 -- # set +x 00:06:05.363 06:36:19 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:05.363 06:36:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:05.363 06:36:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.363 06:36:19 -- common/autotest_common.sh@10 -- # set +x 00:06:05.363 ************************************ 00:06:05.363 START TEST accel_missing_filename 00:06:05.363 ************************************ 00:06:05.363 06:36:19 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:05.363 06:36:19 -- common/autotest_common.sh@650 -- # local es=0 00:06:05.363 06:36:19 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:05.363 06:36:19 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:05.363 06:36:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.363 06:36:19 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:05.363 06:36:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.363 06:36:19 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:05.363 06:36:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:05.363 06:36:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.363 06:36:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.363 06:36:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.363 06:36:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.363 06:36:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.363 06:36:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.363 06:36:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.363 06:36:19 -- accel/accel.sh@42 -- # jq -r . 00:06:05.363 [2024-12-14 06:36:19.136521] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:05.363 [2024-12-14 06:36:19.136666] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58451 ] 00:06:05.363 [2024-12-14 06:36:19.274852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.621 [2024-12-14 06:36:19.416185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.621 [2024-12-14 06:36:19.491094] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.621 [2024-12-14 06:36:19.602584] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:05.881 A filename is required. 
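accel_missing_filename above drives accel_perf with -w compress and no input file, and the application refuses to start ('A filename is required.'). Per the option help printed later in this log, compress/decompress workloads take their uncompressed input via -l, so a non-failing form of the same invocation would look roughly like this (the bib path is the one the compress_verify test uses next):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib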
00:06:05.881 06:36:19 -- common/autotest_common.sh@653 -- # es=234 00:06:05.881 06:36:19 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.881 06:36:19 -- common/autotest_common.sh@662 -- # es=106 00:06:05.881 06:36:19 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:05.881 06:36:19 -- common/autotest_common.sh@670 -- # es=1 00:06:05.881 06:36:19 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.881 00:06:05.881 real 0m0.655s 00:06:05.881 user 0m0.451s 00:06:05.881 sys 0m0.149s 00:06:05.881 06:36:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.881 06:36:19 -- common/autotest_common.sh@10 -- # set +x 00:06:05.881 ************************************ 00:06:05.881 END TEST accel_missing_filename 00:06:05.881 ************************************ 00:06:05.881 06:36:19 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:05.881 06:36:19 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:05.881 06:36:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.881 06:36:19 -- common/autotest_common.sh@10 -- # set +x 00:06:05.881 ************************************ 00:06:05.881 START TEST accel_compress_verify 00:06:05.881 ************************************ 00:06:05.881 06:36:19 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:05.881 06:36:19 -- common/autotest_common.sh@650 -- # local es=0 00:06:05.881 06:36:19 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:05.881 06:36:19 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:05.881 06:36:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.881 06:36:19 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:05.881 06:36:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.881 06:36:19 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:05.881 06:36:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:05.881 06:36:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.881 06:36:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.881 06:36:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.881 06:36:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.881 06:36:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.881 06:36:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.881 06:36:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.881 06:36:19 -- accel/accel.sh@42 -- # jq -r . 00:06:05.881 [2024-12-14 06:36:19.844458] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:05.881 [2024-12-14 06:36:19.844601] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58475 ] 00:06:06.140 [2024-12-14 06:36:19.974197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.140 [2024-12-14 06:36:20.095421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.399 [2024-12-14 06:36:20.170588] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:06.399 [2024-12-14 06:36:20.279194] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:06.658 00:06:06.658 Compression does not support the verify option, aborting. 00:06:06.658 06:36:20 -- common/autotest_common.sh@653 -- # es=161 00:06:06.658 06:36:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:06.658 06:36:20 -- common/autotest_common.sh@662 -- # es=33 00:06:06.658 06:36:20 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:06.658 06:36:20 -- common/autotest_common.sh@670 -- # es=1 00:06:06.658 06:36:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:06.658 00:06:06.658 real 0m0.638s 00:06:06.658 user 0m0.430s 00:06:06.658 sys 0m0.141s 00:06:06.658 06:36:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.658 ************************************ 00:06:06.658 END TEST accel_compress_verify 00:06:06.658 ************************************ 00:06:06.658 06:36:20 -- common/autotest_common.sh@10 -- # set +x 00:06:06.658 06:36:20 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:06.658 06:36:20 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:06.658 06:36:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.658 06:36:20 -- common/autotest_common.sh@10 -- # set +x 00:06:06.658 ************************************ 00:06:06.658 START TEST accel_wrong_workload 00:06:06.658 ************************************ 00:06:06.658 06:36:20 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:06.658 06:36:20 -- common/autotest_common.sh@650 -- # local es=0 00:06:06.658 06:36:20 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:06.658 06:36:20 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:06.658 06:36:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.658 06:36:20 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:06.658 06:36:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.658 06:36:20 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:06.658 06:36:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:06.658 06:36:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.658 06:36:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.658 06:36:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.658 06:36:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.658 06:36:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.658 06:36:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.658 06:36:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.658 06:36:20 -- accel/accel.sh@42 -- # jq -r . 
00:06:06.658 Unsupported workload type: foobar 00:06:06.658 [2024-12-14 06:36:20.541672] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:06.658 accel_perf options: 00:06:06.658 [-h help message] 00:06:06.658 [-q queue depth per core] 00:06:06.658 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:06.658 [-T number of threads per core 00:06:06.658 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:06.658 [-t time in seconds] 00:06:06.658 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:06.658 [ dif_verify, , dif_generate, dif_generate_copy 00:06:06.658 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:06.658 [-l for compress/decompress workloads, name of uncompressed input file 00:06:06.658 [-S for crc32c workload, use this seed value (default 0) 00:06:06.659 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:06.659 [-f for fill workload, use this BYTE value (default 255) 00:06:06.659 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:06.659 [-y verify result if this switch is on] 00:06:06.659 [-a tasks to allocate per core (default: same value as -q)] 00:06:06.659 Can be used to spread operations across a wider range of memory. 00:06:06.659 06:36:20 -- common/autotest_common.sh@653 -- # es=1 00:06:06.659 06:36:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:06.659 06:36:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:06.659 06:36:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:06.659 00:06:06.659 real 0m0.036s 00:06:06.659 user 0m0.017s 00:06:06.659 sys 0m0.018s 00:06:06.659 ************************************ 00:06:06.659 END TEST accel_wrong_workload 00:06:06.659 ************************************ 00:06:06.659 06:36:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.659 06:36:20 -- common/autotest_common.sh@10 -- # set +x 00:06:06.659 06:36:20 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:06.659 06:36:20 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:06.659 06:36:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.659 06:36:20 -- common/autotest_common.sh@10 -- # set +x 00:06:06.659 ************************************ 00:06:06.659 START TEST accel_negative_buffers 00:06:06.659 ************************************ 00:06:06.659 06:36:20 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:06.659 06:36:20 -- common/autotest_common.sh@650 -- # local es=0 00:06:06.659 06:36:20 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:06.659 06:36:20 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:06.659 06:36:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.659 06:36:20 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:06.659 06:36:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.659 06:36:20 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:06.659 06:36:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:06.659 06:36:20 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:06.659 06:36:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.659 06:36:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.659 06:36:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.659 06:36:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.659 06:36:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.659 06:36:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.659 06:36:20 -- accel/accel.sh@42 -- # jq -r . 00:06:06.659 -x option must be non-negative. 00:06:06.659 [2024-12-14 06:36:20.628619] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:06.659 accel_perf options: 00:06:06.659 [-h help message] 00:06:06.659 [-q queue depth per core] 00:06:06.659 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:06.659 [-T number of threads per core 00:06:06.659 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:06.659 [-t time in seconds] 00:06:06.659 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:06.659 [ dif_verify, , dif_generate, dif_generate_copy 00:06:06.659 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:06.659 [-l for compress/decompress workloads, name of uncompressed input file 00:06:06.659 [-S for crc32c workload, use this seed value (default 0) 00:06:06.659 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:06.659 [-f for fill workload, use this BYTE value (default 255) 00:06:06.659 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:06.659 [-y verify result if this switch is on] 00:06:06.659 [-a tasks to allocate per core (default: same value as -q)] 00:06:06.659 Can be used to spread operations across a wider range of memory. 
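The option help above is printed because -x -1 is rejected (source-buffer counts must be non-negative), just as -w foobar was rejected before it. The same help text describes the shape of a valid run, and the crc32c case that follows uses exactly that form; stripped of the -c /dev/fd/62 config argument the test wrapper adds, it is:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y

that is, one second of crc32c with seed 32 and result verification enabled.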
00:06:06.659 06:36:20 -- common/autotest_common.sh@653 -- # es=1 00:06:06.659 06:36:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:06.659 06:36:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:06.659 06:36:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:06.659 00:06:06.659 real 0m0.033s 00:06:06.659 user 0m0.020s 00:06:06.659 sys 0m0.013s 00:06:06.659 06:36:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.659 ************************************ 00:06:06.659 END TEST accel_negative_buffers 00:06:06.659 ************************************ 00:06:06.659 06:36:20 -- common/autotest_common.sh@10 -- # set +x 00:06:06.918 06:36:20 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:06.918 06:36:20 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:06.918 06:36:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.918 06:36:20 -- common/autotest_common.sh@10 -- # set +x 00:06:06.918 ************************************ 00:06:06.918 START TEST accel_crc32c 00:06:06.918 ************************************ 00:06:06.918 06:36:20 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:06.918 06:36:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.918 06:36:20 -- accel/accel.sh@17 -- # local accel_module 00:06:06.918 06:36:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:06.918 06:36:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:06.918 06:36:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.918 06:36:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.918 06:36:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.918 06:36:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.918 06:36:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.918 06:36:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.918 06:36:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.918 06:36:20 -- accel/accel.sh@42 -- # jq -r . 00:06:06.918 [2024-12-14 06:36:20.723237] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:06.918 [2024-12-14 06:36:20.723379] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58539 ] 00:06:06.918 [2024-12-14 06:36:20.860496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.177 [2024-12-14 06:36:21.015079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.555 06:36:22 -- accel/accel.sh@18 -- # out=' 00:06:08.555 SPDK Configuration: 00:06:08.555 Core mask: 0x1 00:06:08.555 00:06:08.555 Accel Perf Configuration: 00:06:08.555 Workload Type: crc32c 00:06:08.555 CRC-32C seed: 32 00:06:08.555 Transfer size: 4096 bytes 00:06:08.555 Vector count 1 00:06:08.555 Module: software 00:06:08.555 Queue depth: 32 00:06:08.555 Allocate depth: 32 00:06:08.555 # threads/core: 1 00:06:08.555 Run time: 1 seconds 00:06:08.555 Verify: Yes 00:06:08.555 00:06:08.555 Running for 1 seconds... 
00:06:08.555 00:06:08.555 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:08.555 ------------------------------------------------------------------------------------ 00:06:08.555 0,0 531360/s 2075 MiB/s 0 0 00:06:08.555 ==================================================================================== 00:06:08.555 Total 531360/s 2075 MiB/s 0 0' 00:06:08.555 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.555 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.555 06:36:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:08.555 06:36:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.555 06:36:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:08.555 06:36:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.555 06:36:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.555 06:36:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.555 06:36:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.555 06:36:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.555 06:36:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.555 06:36:22 -- accel/accel.sh@42 -- # jq -r . 00:06:08.555 [2024-12-14 06:36:22.440134] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.555 [2024-12-14 06:36:22.440269] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58564 ] 00:06:08.813 [2024-12-14 06:36:22.571029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.813 [2024-12-14 06:36:22.733299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.072 06:36:22 -- accel/accel.sh@21 -- # val= 00:06:09.072 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.072 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.072 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.072 06:36:22 -- accel/accel.sh@21 -- # val= 00:06:09.072 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.073 06:36:22 -- accel/accel.sh@21 -- # val=0x1 00:06:09.073 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.073 06:36:22 -- accel/accel.sh@21 -- # val= 00:06:09.073 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.073 06:36:22 -- accel/accel.sh@21 -- # val= 00:06:09.073 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.073 06:36:22 -- accel/accel.sh@21 -- # val=crc32c 00:06:09.073 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.073 06:36:22 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.073 06:36:22 -- accel/accel.sh@21 -- # val=32 00:06:09.073 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.073 06:36:22 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:09.073 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.073 06:36:22 -- accel/accel.sh@21 -- # val= 00:06:09.073 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.073 06:36:22 -- accel/accel.sh@21 -- # val=software 00:06:09.073 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.073 06:36:22 -- accel/accel.sh@23 -- # accel_module=software 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.073 06:36:22 -- accel/accel.sh@21 -- # val=32 00:06:09.073 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.073 06:36:22 -- accel/accel.sh@21 -- # val=32 00:06:09.073 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.073 06:36:22 -- accel/accel.sh@21 -- # val=1 00:06:09.073 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.073 06:36:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:09.073 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.073 06:36:22 -- accel/accel.sh@21 -- # val=Yes 00:06:09.073 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.073 06:36:22 -- accel/accel.sh@21 -- # val= 00:06:09.073 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.073 06:36:22 -- accel/accel.sh@21 -- # val= 00:06:09.073 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.073 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:10.450 06:36:24 -- accel/accel.sh@21 -- # val= 00:06:10.450 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.450 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.450 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.450 06:36:24 -- accel/accel.sh@21 -- # val= 00:06:10.450 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.450 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.450 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.450 06:36:24 -- accel/accel.sh@21 -- # val= 00:06:10.450 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.450 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.450 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.450 06:36:24 -- accel/accel.sh@21 -- # val= 00:06:10.450 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.450 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.450 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.450 06:36:24 -- accel/accel.sh@21 -- # val= 00:06:10.450 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.450 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.450 06:36:24 -- 
accel/accel.sh@20 -- # read -r var val 00:06:10.450 06:36:24 -- accel/accel.sh@21 -- # val= 00:06:10.450 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.450 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.450 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.450 06:36:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:10.450 06:36:24 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:10.450 06:36:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.450 00:06:10.450 real 0m3.440s 00:06:10.450 user 0m2.896s 00:06:10.450 sys 0m0.343s 00:06:10.450 06:36:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.450 06:36:24 -- common/autotest_common.sh@10 -- # set +x 00:06:10.450 ************************************ 00:06:10.450 END TEST accel_crc32c 00:06:10.450 ************************************ 00:06:10.450 06:36:24 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:10.450 06:36:24 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:10.450 06:36:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.450 06:36:24 -- common/autotest_common.sh@10 -- # set +x 00:06:10.450 ************************************ 00:06:10.450 START TEST accel_crc32c_C2 00:06:10.450 ************************************ 00:06:10.450 06:36:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:10.450 06:36:24 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.450 06:36:24 -- accel/accel.sh@17 -- # local accel_module 00:06:10.450 06:36:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:10.450 06:36:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:10.450 06:36:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.450 06:36:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.450 06:36:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.450 06:36:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.450 06:36:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.450 06:36:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.450 06:36:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.450 06:36:24 -- accel/accel.sh@42 -- # jq -r . 00:06:10.450 [2024-12-14 06:36:24.214335] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:10.450 [2024-12-14 06:36:24.214463] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58599 ] 00:06:10.450 [2024-12-14 06:36:24.352335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.709 [2024-12-14 06:36:24.515462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.086 06:36:25 -- accel/accel.sh@18 -- # out=' 00:06:12.086 SPDK Configuration: 00:06:12.086 Core mask: 0x1 00:06:12.086 00:06:12.086 Accel Perf Configuration: 00:06:12.086 Workload Type: crc32c 00:06:12.086 CRC-32C seed: 0 00:06:12.086 Transfer size: 4096 bytes 00:06:12.086 Vector count 2 00:06:12.086 Module: software 00:06:12.086 Queue depth: 32 00:06:12.086 Allocate depth: 32 00:06:12.086 # threads/core: 1 00:06:12.086 Run time: 1 seconds 00:06:12.086 Verify: Yes 00:06:12.086 00:06:12.086 Running for 1 seconds... 
00:06:12.086 00:06:12.086 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:12.086 ------------------------------------------------------------------------------------ 00:06:12.086 0,0 416864/s 3256 MiB/s 0 0 00:06:12.086 ==================================================================================== 00:06:12.086 Total 416864/s 1628 MiB/s 0 0' 00:06:12.086 06:36:25 -- accel/accel.sh@20 -- # IFS=: 00:06:12.086 06:36:25 -- accel/accel.sh@20 -- # read -r var val 00:06:12.086 06:36:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:12.086 06:36:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:12.086 06:36:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.086 06:36:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.086 06:36:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.086 06:36:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.086 06:36:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.086 06:36:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.086 06:36:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.086 06:36:25 -- accel/accel.sh@42 -- # jq -r . 00:06:12.086 [2024-12-14 06:36:25.899547] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.086 [2024-12-14 06:36:25.899670] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58618 ] 00:06:12.086 [2024-12-14 06:36:26.037833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.344 [2024-12-14 06:36:26.125669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.344 06:36:26 -- accel/accel.sh@21 -- # val= 00:06:12.344 06:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.344 06:36:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.345 06:36:26 -- accel/accel.sh@21 -- # val= 00:06:12.345 06:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.345 06:36:26 -- accel/accel.sh@21 -- # val=0x1 00:06:12.345 06:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.345 06:36:26 -- accel/accel.sh@21 -- # val= 00:06:12.345 06:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.345 06:36:26 -- accel/accel.sh@21 -- # val= 00:06:12.345 06:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.345 06:36:26 -- accel/accel.sh@21 -- # val=crc32c 00:06:12.345 06:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.345 06:36:26 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.345 06:36:26 -- accel/accel.sh@21 -- # val=0 00:06:12.345 06:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.345 06:36:26 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:12.345 06:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.345 06:36:26 -- accel/accel.sh@21 -- # val= 00:06:12.345 06:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.345 06:36:26 -- accel/accel.sh@21 -- # val=software 00:06:12.345 06:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.345 06:36:26 -- accel/accel.sh@23 -- # accel_module=software 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.345 06:36:26 -- accel/accel.sh@21 -- # val=32 00:06:12.345 06:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.345 06:36:26 -- accel/accel.sh@21 -- # val=32 00:06:12.345 06:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.345 06:36:26 -- accel/accel.sh@21 -- # val=1 00:06:12.345 06:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.345 06:36:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:12.345 06:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.345 06:36:26 -- accel/accel.sh@21 -- # val=Yes 00:06:12.345 06:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.345 06:36:26 -- accel/accel.sh@21 -- # val= 00:06:12.345 06:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.345 06:36:26 -- accel/accel.sh@21 -- # val= 00:06:12.345 06:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.345 06:36:26 -- accel/accel.sh@20 -- # read -r var val 00:06:13.721 06:36:27 -- accel/accel.sh@21 -- # val= 00:06:13.721 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.721 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.721 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.721 06:36:27 -- accel/accel.sh@21 -- # val= 00:06:13.721 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.721 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.721 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.721 06:36:27 -- accel/accel.sh@21 -- # val= 00:06:13.721 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.721 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.721 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.721 06:36:27 -- accel/accel.sh@21 -- # val= 00:06:13.721 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.721 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.721 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.721 06:36:27 -- accel/accel.sh@21 -- # val= 00:06:13.721 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.721 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.721 06:36:27 -- 
accel/accel.sh@20 -- # read -r var val 00:06:13.721 06:36:27 -- accel/accel.sh@21 -- # val= 00:06:13.721 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.721 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.721 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.721 06:36:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:13.721 06:36:27 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:13.721 06:36:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.721 00:06:13.721 real 0m3.342s 00:06:13.721 user 0m2.821s 00:06:13.721 sys 0m0.323s 00:06:13.721 06:36:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.721 06:36:27 -- common/autotest_common.sh@10 -- # set +x 00:06:13.721 ************************************ 00:06:13.721 END TEST accel_crc32c_C2 00:06:13.721 ************************************ 00:06:13.721 06:36:27 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:13.721 06:36:27 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:13.721 06:36:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.721 06:36:27 -- common/autotest_common.sh@10 -- # set +x 00:06:13.721 ************************************ 00:06:13.721 START TEST accel_copy 00:06:13.721 ************************************ 00:06:13.721 06:36:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:13.721 06:36:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.721 06:36:27 -- accel/accel.sh@17 -- # local accel_module 00:06:13.721 06:36:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:13.721 06:36:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:13.721 06:36:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.721 06:36:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.722 06:36:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.722 06:36:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.722 06:36:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.722 06:36:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.722 06:36:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.722 06:36:27 -- accel/accel.sh@42 -- # jq -r . 00:06:13.722 [2024-12-14 06:36:27.617575] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:13.722 [2024-12-14 06:36:27.617680] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58657 ] 00:06:13.980 [2024-12-14 06:36:27.754558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.980 [2024-12-14 06:36:27.927919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.357 06:36:29 -- accel/accel.sh@18 -- # out=' 00:06:15.357 SPDK Configuration: 00:06:15.357 Core mask: 0x1 00:06:15.357 00:06:15.357 Accel Perf Configuration: 00:06:15.357 Workload Type: copy 00:06:15.357 Transfer size: 4096 bytes 00:06:15.357 Vector count 1 00:06:15.357 Module: software 00:06:15.357 Queue depth: 32 00:06:15.357 Allocate depth: 32 00:06:15.357 # threads/core: 1 00:06:15.357 Run time: 1 seconds 00:06:15.357 Verify: Yes 00:06:15.357 00:06:15.357 Running for 1 seconds... 
00:06:15.357 00:06:15.357 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:15.357 ------------------------------------------------------------------------------------ 00:06:15.357 0,0 388960/s 1519 MiB/s 0 0 00:06:15.357 ==================================================================================== 00:06:15.357 Total 388960/s 1519 MiB/s 0 0' 00:06:15.357 06:36:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:15.357 06:36:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.357 06:36:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.357 06:36:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:15.357 06:36:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.357 06:36:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.357 06:36:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.357 06:36:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.357 06:36:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.357 06:36:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.357 06:36:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.357 06:36:29 -- accel/accel.sh@42 -- # jq -r . 00:06:15.357 [2024-12-14 06:36:29.276201] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:15.357 [2024-12-14 06:36:29.276328] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58678 ] 00:06:15.616 [2024-12-14 06:36:29.405634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.616 [2024-12-14 06:36:29.484131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.616 06:36:29 -- accel/accel.sh@21 -- # val= 00:06:15.616 06:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.616 06:36:29 -- accel/accel.sh@21 -- # val= 00:06:15.616 06:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.616 06:36:29 -- accel/accel.sh@21 -- # val=0x1 00:06:15.616 06:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.616 06:36:29 -- accel/accel.sh@21 -- # val= 00:06:15.616 06:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.616 06:36:29 -- accel/accel.sh@21 -- # val= 00:06:15.616 06:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.616 06:36:29 -- accel/accel.sh@21 -- # val=copy 00:06:15.616 06:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.616 06:36:29 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.616 06:36:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:15.616 06:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.616 06:36:29 -- 
accel/accel.sh@21 -- # val= 00:06:15.616 06:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.616 06:36:29 -- accel/accel.sh@21 -- # val=software 00:06:15.616 06:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.616 06:36:29 -- accel/accel.sh@23 -- # accel_module=software 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.616 06:36:29 -- accel/accel.sh@21 -- # val=32 00:06:15.616 06:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.616 06:36:29 -- accel/accel.sh@21 -- # val=32 00:06:15.616 06:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.616 06:36:29 -- accel/accel.sh@21 -- # val=1 00:06:15.616 06:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.616 06:36:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:15.616 06:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.616 06:36:29 -- accel/accel.sh@21 -- # val=Yes 00:06:15.616 06:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.616 06:36:29 -- accel/accel.sh@21 -- # val= 00:06:15.616 06:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.616 06:36:29 -- accel/accel.sh@21 -- # val= 00:06:15.616 06:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.616 06:36:29 -- accel/accel.sh@20 -- # read -r var val 00:06:16.993 06:36:30 -- accel/accel.sh@21 -- # val= 00:06:16.993 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.993 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.993 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.993 06:36:30 -- accel/accel.sh@21 -- # val= 00:06:16.993 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.993 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.993 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.993 06:36:30 -- accel/accel.sh@21 -- # val= 00:06:16.993 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.993 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.993 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.993 06:36:30 -- accel/accel.sh@21 -- # val= 00:06:16.993 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.993 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.993 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.993 06:36:30 -- accel/accel.sh@21 -- # val= 00:06:16.993 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.993 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.993 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.993 06:36:30 -- accel/accel.sh@21 -- # val= 00:06:16.993 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.993 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.993 06:36:30 -- 
accel/accel.sh@20 -- # read -r var val 00:06:16.993 06:36:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:16.993 06:36:30 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:16.993 06:36:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.993 00:06:16.993 real 0m3.208s 00:06:16.993 user 0m2.714s 00:06:16.994 sys 0m0.289s 00:06:16.994 06:36:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.994 06:36:30 -- common/autotest_common.sh@10 -- # set +x 00:06:16.994 ************************************ 00:06:16.994 END TEST accel_copy 00:06:16.994 ************************************ 00:06:16.994 06:36:30 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:16.994 06:36:30 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:16.994 06:36:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.994 06:36:30 -- common/autotest_common.sh@10 -- # set +x 00:06:16.994 ************************************ 00:06:16.994 START TEST accel_fill 00:06:16.994 ************************************ 00:06:16.994 06:36:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:16.994 06:36:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.994 06:36:30 -- accel/accel.sh@17 -- # local accel_module 00:06:16.994 06:36:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:16.994 06:36:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:16.994 06:36:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.994 06:36:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.994 06:36:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.994 06:36:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.994 06:36:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.994 06:36:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.994 06:36:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.994 06:36:30 -- accel/accel.sh@42 -- # jq -r . 00:06:16.994 [2024-12-14 06:36:30.874787] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:16.994 [2024-12-14 06:36:30.874908] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58712 ] 00:06:17.252 [2024-12-14 06:36:31.010214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.252 [2024-12-14 06:36:31.123618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.629 06:36:32 -- accel/accel.sh@18 -- # out=' 00:06:18.629 SPDK Configuration: 00:06:18.629 Core mask: 0x1 00:06:18.629 00:06:18.629 Accel Perf Configuration: 00:06:18.629 Workload Type: fill 00:06:18.629 Fill pattern: 0x80 00:06:18.629 Transfer size: 4096 bytes 00:06:18.629 Vector count 1 00:06:18.629 Module: software 00:06:18.629 Queue depth: 64 00:06:18.629 Allocate depth: 64 00:06:18.629 # threads/core: 1 00:06:18.629 Run time: 1 seconds 00:06:18.629 Verify: Yes 00:06:18.629 00:06:18.629 Running for 1 seconds... 
00:06:18.629 00:06:18.629 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:18.629 ------------------------------------------------------------------------------------ 00:06:18.629 0,0 576640/s 2252 MiB/s 0 0 00:06:18.629 ==================================================================================== 00:06:18.629 Total 576640/s 2252 MiB/s 0 0' 00:06:18.629 06:36:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:18.629 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.629 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.629 06:36:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:18.629 06:36:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.629 06:36:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.629 06:36:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.629 06:36:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.629 06:36:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.629 06:36:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.629 06:36:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.629 06:36:32 -- accel/accel.sh@42 -- # jq -r . 00:06:18.629 [2024-12-14 06:36:32.447200] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:18.629 [2024-12-14 06:36:32.447317] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58732 ] 00:06:18.629 [2024-12-14 06:36:32.573610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.888 [2024-12-14 06:36:32.651504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.888 06:36:32 -- accel/accel.sh@21 -- # val= 00:06:18.888 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.888 06:36:32 -- accel/accel.sh@21 -- # val= 00:06:18.888 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.888 06:36:32 -- accel/accel.sh@21 -- # val=0x1 00:06:18.888 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.888 06:36:32 -- accel/accel.sh@21 -- # val= 00:06:18.888 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.888 06:36:32 -- accel/accel.sh@21 -- # val= 00:06:18.888 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.888 06:36:32 -- accel/accel.sh@21 -- # val=fill 00:06:18.888 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.888 06:36:32 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.888 06:36:32 -- accel/accel.sh@21 -- # val=0x80 00:06:18.888 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # read -r var val 
00:06:18.888 06:36:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:18.888 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.888 06:36:32 -- accel/accel.sh@21 -- # val= 00:06:18.888 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.888 06:36:32 -- accel/accel.sh@21 -- # val=software 00:06:18.888 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.888 06:36:32 -- accel/accel.sh@23 -- # accel_module=software 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.888 06:36:32 -- accel/accel.sh@21 -- # val=64 00:06:18.888 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.888 06:36:32 -- accel/accel.sh@21 -- # val=64 00:06:18.888 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.888 06:36:32 -- accel/accel.sh@21 -- # val=1 00:06:18.888 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.888 06:36:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:18.888 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.888 06:36:32 -- accel/accel.sh@21 -- # val=Yes 00:06:18.888 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.888 06:36:32 -- accel/accel.sh@21 -- # val= 00:06:18.888 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.888 06:36:32 -- accel/accel.sh@21 -- # val= 00:06:18.888 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.888 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:20.264 06:36:33 -- accel/accel.sh@21 -- # val= 00:06:20.264 06:36:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.264 06:36:33 -- accel/accel.sh@20 -- # IFS=: 00:06:20.264 06:36:33 -- accel/accel.sh@20 -- # read -r var val 00:06:20.264 06:36:33 -- accel/accel.sh@21 -- # val= 00:06:20.264 06:36:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.264 06:36:33 -- accel/accel.sh@20 -- # IFS=: 00:06:20.264 06:36:33 -- accel/accel.sh@20 -- # read -r var val 00:06:20.264 06:36:33 -- accel/accel.sh@21 -- # val= 00:06:20.264 06:36:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.264 06:36:33 -- accel/accel.sh@20 -- # IFS=: 00:06:20.264 06:36:33 -- accel/accel.sh@20 -- # read -r var val 00:06:20.264 06:36:33 -- accel/accel.sh@21 -- # val= 00:06:20.264 06:36:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.264 06:36:33 -- accel/accel.sh@20 -- # IFS=: 00:06:20.264 06:36:33 -- accel/accel.sh@20 -- # read -r var val 00:06:20.264 06:36:33 -- accel/accel.sh@21 -- # val= 00:06:20.264 06:36:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.264 06:36:33 -- accel/accel.sh@20 -- # IFS=: 
00:06:20.264 06:36:33 -- accel/accel.sh@20 -- # read -r var val 00:06:20.264 06:36:33 -- accel/accel.sh@21 -- # val= 00:06:20.264 06:36:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.264 06:36:33 -- accel/accel.sh@20 -- # IFS=: 00:06:20.264 06:36:33 -- accel/accel.sh@20 -- # read -r var val 00:06:20.264 06:36:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:20.264 06:36:33 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:20.264 06:36:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.264 00:06:20.264 real 0m3.113s 00:06:20.264 user 0m2.654s 00:06:20.264 sys 0m0.259s 00:06:20.264 06:36:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.264 06:36:33 -- common/autotest_common.sh@10 -- # set +x 00:06:20.264 ************************************ 00:06:20.264 END TEST accel_fill 00:06:20.264 ************************************ 00:06:20.264 06:36:34 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:20.264 06:36:34 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:20.264 06:36:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.264 06:36:34 -- common/autotest_common.sh@10 -- # set +x 00:06:20.264 ************************************ 00:06:20.264 START TEST accel_copy_crc32c 00:06:20.264 ************************************ 00:06:20.264 06:36:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:20.264 06:36:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.264 06:36:34 -- accel/accel.sh@17 -- # local accel_module 00:06:20.264 06:36:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:20.264 06:36:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:20.264 06:36:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.264 06:36:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.264 06:36:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.264 06:36:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.264 06:36:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.264 06:36:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.264 06:36:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.264 06:36:34 -- accel/accel.sh@42 -- # jq -r . 00:06:20.264 [2024-12-14 06:36:34.040031] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:20.264 [2024-12-14 06:36:34.040133] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58766 ] 00:06:20.264 [2024-12-14 06:36:34.177434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.523 [2024-12-14 06:36:34.256459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.900 06:36:35 -- accel/accel.sh@18 -- # out=' 00:06:21.900 SPDK Configuration: 00:06:21.900 Core mask: 0x1 00:06:21.900 00:06:21.900 Accel Perf Configuration: 00:06:21.900 Workload Type: copy_crc32c 00:06:21.900 CRC-32C seed: 0 00:06:21.900 Vector size: 4096 bytes 00:06:21.900 Transfer size: 4096 bytes 00:06:21.900 Vector count 1 00:06:21.900 Module: software 00:06:21.900 Queue depth: 32 00:06:21.900 Allocate depth: 32 00:06:21.900 # threads/core: 1 00:06:21.900 Run time: 1 seconds 00:06:21.900 Verify: Yes 00:06:21.900 00:06:21.900 Running for 1 seconds... 
00:06:21.900 00:06:21.900 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:21.900 ------------------------------------------------------------------------------------ 00:06:21.900 0,0 312256/s 1219 MiB/s 0 0 00:06:21.900 ==================================================================================== 00:06:21.900 Total 312256/s 1219 MiB/s 0 0' 00:06:21.900 06:36:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:21.900 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.900 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.900 06:36:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:21.900 06:36:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.900 06:36:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.900 06:36:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.900 06:36:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.900 06:36:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.900 06:36:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.900 06:36:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.900 06:36:35 -- accel/accel.sh@42 -- # jq -r . 00:06:21.900 [2024-12-14 06:36:35.575634] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:21.900 [2024-12-14 06:36:35.575748] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58786 ] 00:06:21.900 [2024-12-14 06:36:35.704510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.900 [2024-12-14 06:36:35.783464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.900 06:36:35 -- accel/accel.sh@21 -- # val= 00:06:21.900 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.900 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.900 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.900 06:36:35 -- accel/accel.sh@21 -- # val= 00:06:21.900 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.900 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.900 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.900 06:36:35 -- accel/accel.sh@21 -- # val=0x1 00:06:21.900 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.900 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.900 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.900 06:36:35 -- accel/accel.sh@21 -- # val= 00:06:21.900 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.900 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.900 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.900 06:36:35 -- accel/accel.sh@21 -- # val= 00:06:21.901 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.901 06:36:35 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:21.901 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.901 06:36:35 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.901 06:36:35 -- accel/accel.sh@21 -- # val=0 00:06:21.901 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.901 
06:36:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:21.901 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.901 06:36:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:21.901 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.901 06:36:35 -- accel/accel.sh@21 -- # val= 00:06:21.901 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.901 06:36:35 -- accel/accel.sh@21 -- # val=software 00:06:21.901 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.901 06:36:35 -- accel/accel.sh@23 -- # accel_module=software 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.901 06:36:35 -- accel/accel.sh@21 -- # val=32 00:06:21.901 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.901 06:36:35 -- accel/accel.sh@21 -- # val=32 00:06:21.901 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.901 06:36:35 -- accel/accel.sh@21 -- # val=1 00:06:21.901 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.901 06:36:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:21.901 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.901 06:36:35 -- accel/accel.sh@21 -- # val=Yes 00:06:21.901 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.901 06:36:35 -- accel/accel.sh@21 -- # val= 00:06:21.901 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.901 06:36:35 -- accel/accel.sh@21 -- # val= 00:06:21.901 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.901 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:23.278 06:36:37 -- accel/accel.sh@21 -- # val= 00:06:23.278 06:36:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.278 06:36:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.278 06:36:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.278 06:36:37 -- accel/accel.sh@21 -- # val= 00:06:23.278 06:36:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.278 06:36:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.278 06:36:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.278 06:36:37 -- accel/accel.sh@21 -- # val= 00:06:23.278 06:36:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.278 06:36:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.278 06:36:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.278 06:36:37 -- accel/accel.sh@21 -- # val= 00:06:23.278 06:36:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.278 06:36:37 -- accel/accel.sh@20 -- # IFS=: 
00:06:23.278 06:36:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.278 06:36:37 -- accel/accel.sh@21 -- # val= 00:06:23.278 06:36:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.278 06:36:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.278 06:36:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.278 06:36:37 -- accel/accel.sh@21 -- # val= 00:06:23.278 06:36:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.278 06:36:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.278 06:36:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.279 06:36:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:23.279 06:36:37 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:23.279 06:36:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.279 00:06:23.279 real 0m3.079s 00:06:23.279 user 0m2.618s 00:06:23.279 sys 0m0.261s 00:06:23.279 06:36:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.279 06:36:37 -- common/autotest_common.sh@10 -- # set +x 00:06:23.279 ************************************ 00:06:23.279 END TEST accel_copy_crc32c 00:06:23.279 ************************************ 00:06:23.279 06:36:37 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:23.279 06:36:37 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:23.279 06:36:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.279 06:36:37 -- common/autotest_common.sh@10 -- # set +x 00:06:23.279 ************************************ 00:06:23.279 START TEST accel_copy_crc32c_C2 00:06:23.279 ************************************ 00:06:23.279 06:36:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:23.279 06:36:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.279 06:36:37 -- accel/accel.sh@17 -- # local accel_module 00:06:23.279 06:36:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:23.279 06:36:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:23.279 06:36:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.279 06:36:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.279 06:36:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.279 06:36:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.279 06:36:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.279 06:36:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.279 06:36:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.279 06:36:37 -- accel/accel.sh@42 -- # jq -r . 00:06:23.279 [2024-12-14 06:36:37.167991] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:23.279 [2024-12-14 06:36:37.168088] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58820 ] 00:06:23.537 [2024-12-14 06:36:37.290296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.537 [2024-12-14 06:36:37.369331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.913 06:36:38 -- accel/accel.sh@18 -- # out=' 00:06:24.913 SPDK Configuration: 00:06:24.913 Core mask: 0x1 00:06:24.913 00:06:24.913 Accel Perf Configuration: 00:06:24.913 Workload Type: copy_crc32c 00:06:24.913 CRC-32C seed: 0 00:06:24.913 Vector size: 4096 bytes 00:06:24.913 Transfer size: 8192 bytes 00:06:24.913 Vector count 2 00:06:24.913 Module: software 00:06:24.913 Queue depth: 32 00:06:24.913 Allocate depth: 32 00:06:24.913 # threads/core: 1 00:06:24.913 Run time: 1 seconds 00:06:24.913 Verify: Yes 00:06:24.913 00:06:24.913 Running for 1 seconds... 00:06:24.913 00:06:24.913 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:24.913 ------------------------------------------------------------------------------------ 00:06:24.913 0,0 220416/s 1722 MiB/s 0 0 00:06:24.913 ==================================================================================== 00:06:24.913 Total 220416/s 861 MiB/s 0 0' 00:06:24.913 06:36:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:24.913 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.913 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.913 06:36:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:24.913 06:36:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.913 06:36:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.913 06:36:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.913 06:36:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.913 06:36:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.913 06:36:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.913 06:36:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.913 06:36:38 -- accel/accel.sh@42 -- # jq -r . 00:06:24.913 [2024-12-14 06:36:38.691943] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:24.913 [2024-12-14 06:36:38.692104] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58840 ] 00:06:24.913 [2024-12-14 06:36:38.820371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.913 [2024-12-14 06:36:38.900035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.172 06:36:38 -- accel/accel.sh@21 -- # val= 00:06:25.172 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:25.172 06:36:38 -- accel/accel.sh@21 -- # val= 00:06:25.172 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:25.172 06:36:38 -- accel/accel.sh@21 -- # val=0x1 00:06:25.172 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:25.172 06:36:38 -- accel/accel.sh@21 -- # val= 00:06:25.172 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:25.172 06:36:38 -- accel/accel.sh@21 -- # val= 00:06:25.172 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:25.172 06:36:38 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:25.172 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.172 06:36:38 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:25.172 06:36:38 -- accel/accel.sh@21 -- # val=0 00:06:25.172 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:25.172 06:36:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.172 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:25.172 06:36:38 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:25.172 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:25.172 06:36:38 -- accel/accel.sh@21 -- # val= 00:06:25.172 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:25.172 06:36:38 -- accel/accel.sh@21 -- # val=software 00:06:25.172 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.172 06:36:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:25.172 06:36:38 -- accel/accel.sh@21 -- # val=32 00:06:25.172 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:25.172 06:36:38 -- accel/accel.sh@21 -- # val=32 
00:06:25.172 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:25.172 06:36:38 -- accel/accel.sh@21 -- # val=1 00:06:25.172 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:25.172 06:36:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:25.172 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:25.172 06:36:38 -- accel/accel.sh@21 -- # val=Yes 00:06:25.172 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:25.172 06:36:38 -- accel/accel.sh@21 -- # val= 00:06:25.172 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:25.172 06:36:38 -- accel/accel.sh@21 -- # val= 00:06:25.172 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:25.172 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.550 06:36:40 -- accel/accel.sh@21 -- # val= 00:06:26.550 06:36:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.550 06:36:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.550 06:36:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.550 06:36:40 -- accel/accel.sh@21 -- # val= 00:06:26.550 06:36:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.550 06:36:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.550 06:36:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.550 06:36:40 -- accel/accel.sh@21 -- # val= 00:06:26.550 06:36:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.550 06:36:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.550 06:36:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.550 06:36:40 -- accel/accel.sh@21 -- # val= 00:06:26.550 06:36:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.550 06:36:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.550 06:36:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.550 06:36:40 -- accel/accel.sh@21 -- # val= 00:06:26.550 06:36:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.550 06:36:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.550 06:36:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.550 06:36:40 -- accel/accel.sh@21 -- # val= 00:06:26.550 06:36:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.550 06:36:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.550 06:36:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.550 06:36:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:26.550 06:36:40 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:26.550 06:36:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.550 00:06:26.550 real 0m3.067s 00:06:26.550 user 0m2.622s 00:06:26.550 sys 0m0.249s 00:06:26.550 06:36:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.550 06:36:40 -- common/autotest_common.sh@10 -- # set +x 00:06:26.550 ************************************ 00:06:26.550 END TEST accel_copy_crc32c_C2 00:06:26.550 ************************************ 00:06:26.550 06:36:40 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:26.550 06:36:40 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:06:26.550 06:36:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.550 06:36:40 -- common/autotest_common.sh@10 -- # set +x 00:06:26.550 ************************************ 00:06:26.550 START TEST accel_dualcast 00:06:26.550 ************************************ 00:06:26.550 06:36:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:26.550 06:36:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.550 06:36:40 -- accel/accel.sh@17 -- # local accel_module 00:06:26.550 06:36:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:26.550 06:36:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:26.550 06:36:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.550 06:36:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.550 06:36:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.550 06:36:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.550 06:36:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.550 06:36:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.550 06:36:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.550 06:36:40 -- accel/accel.sh@42 -- # jq -r . 00:06:26.550 [2024-12-14 06:36:40.289636] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.550 [2024-12-14 06:36:40.289729] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58874 ] 00:06:26.550 [2024-12-14 06:36:40.422294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.550 [2024-12-14 06:36:40.508144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.926 06:36:41 -- accel/accel.sh@18 -- # out=' 00:06:27.926 SPDK Configuration: 00:06:27.926 Core mask: 0x1 00:06:27.926 00:06:27.926 Accel Perf Configuration: 00:06:27.926 Workload Type: dualcast 00:06:27.926 Transfer size: 4096 bytes 00:06:27.926 Vector count 1 00:06:27.926 Module: software 00:06:27.926 Queue depth: 32 00:06:27.926 Allocate depth: 32 00:06:27.926 # threads/core: 1 00:06:27.926 Run time: 1 seconds 00:06:27.926 Verify: Yes 00:06:27.926 00:06:27.926 Running for 1 seconds... 00:06:27.926 00:06:27.926 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:27.926 ------------------------------------------------------------------------------------ 00:06:27.926 0,0 430656/s 1682 MiB/s 0 0 00:06:27.926 ==================================================================================== 00:06:27.926 Total 430656/s 1682 MiB/s 0 0' 00:06:27.926 06:36:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.926 06:36:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:27.926 06:36:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.926 06:36:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.926 06:36:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:27.926 06:36:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.926 06:36:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.926 06:36:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.926 06:36:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.927 06:36:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.927 06:36:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.927 06:36:41 -- accel/accel.sh@42 -- # jq -r . 
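The dualcast run above moves one 4096-byte source to two destination buffers per operation; 430656 * 4096 bytes is about 1682 MiB/s, so the bandwidth column counts the source payload rather than the total bytes written. A minimal sketch of the operation's semantics follows, with illustrative names; the software module's actual code is not shown in this log.

    #include <string.h>

    /* Illustrative dualcast: one source buffer replicated to two destinations,
     * presumably little more than two memcpy() calls in the software path. */
    static void dualcast(void *dst1, void *dst2, const void *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }

    int main(void)
    {
        static char src[4096], a[4096], b[4096];
        dualcast(a, b, src, sizeof(src));
        return 0;
    }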
00:06:27.927 [2024-12-14 06:36:41.829247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:27.927 [2024-12-14 06:36:41.829354] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58894 ] 00:06:28.185 [2024-12-14 06:36:41.960833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.185 [2024-12-14 06:36:42.034934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.185 06:36:42 -- accel/accel.sh@21 -- # val= 00:06:28.185 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.185 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.185 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.185 06:36:42 -- accel/accel.sh@21 -- # val= 00:06:28.185 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.185 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.185 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.185 06:36:42 -- accel/accel.sh@21 -- # val=0x1 00:06:28.185 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.185 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.185 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.185 06:36:42 -- accel/accel.sh@21 -- # val= 00:06:28.185 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.185 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.185 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.185 06:36:42 -- accel/accel.sh@21 -- # val= 00:06:28.185 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.185 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.185 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.185 06:36:42 -- accel/accel.sh@21 -- # val=dualcast 00:06:28.185 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.185 06:36:42 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:28.185 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.185 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.185 06:36:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.185 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.185 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.185 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.185 06:36:42 -- accel/accel.sh@21 -- # val= 00:06:28.185 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.186 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.186 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.186 06:36:42 -- accel/accel.sh@21 -- # val=software 00:06:28.186 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.186 06:36:42 -- accel/accel.sh@23 -- # accel_module=software 00:06:28.186 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.186 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.186 06:36:42 -- accel/accel.sh@21 -- # val=32 00:06:28.186 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.186 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.186 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.186 06:36:42 -- accel/accel.sh@21 -- # val=32 00:06:28.186 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.186 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.186 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.186 06:36:42 -- accel/accel.sh@21 -- # val=1 00:06:28.186 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.186 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.186 
06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.186 06:36:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:28.186 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.186 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.186 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.186 06:36:42 -- accel/accel.sh@21 -- # val=Yes 00:06:28.186 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.186 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.186 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.186 06:36:42 -- accel/accel.sh@21 -- # val= 00:06:28.186 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.186 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.186 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.186 06:36:42 -- accel/accel.sh@21 -- # val= 00:06:28.186 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.186 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.186 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:29.579 06:36:43 -- accel/accel.sh@21 -- # val= 00:06:29.579 06:36:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.579 06:36:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.579 06:36:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.579 06:36:43 -- accel/accel.sh@21 -- # val= 00:06:29.579 06:36:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.579 06:36:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.579 06:36:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.579 06:36:43 -- accel/accel.sh@21 -- # val= 00:06:29.579 06:36:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.579 06:36:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.579 06:36:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.579 06:36:43 -- accel/accel.sh@21 -- # val= 00:06:29.579 06:36:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.579 06:36:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.579 06:36:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.579 06:36:43 -- accel/accel.sh@21 -- # val= 00:06:29.579 06:36:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.579 06:36:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.579 06:36:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.579 06:36:43 -- accel/accel.sh@21 -- # val= 00:06:29.579 06:36:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.579 06:36:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.579 06:36:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.579 06:36:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.579 06:36:43 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:29.579 06:36:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.579 00:06:29.579 real 0m3.081s 00:06:29.579 user 0m2.626s 00:06:29.579 sys 0m0.256s 00:06:29.579 06:36:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.579 06:36:43 -- common/autotest_common.sh@10 -- # set +x 00:06:29.579 ************************************ 00:06:29.579 END TEST accel_dualcast 00:06:29.579 ************************************ 00:06:29.579 06:36:43 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:29.579 06:36:43 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:29.579 06:36:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.579 06:36:43 -- common/autotest_common.sh@10 -- # set +x 00:06:29.579 ************************************ 00:06:29.579 START TEST accel_compare 00:06:29.579 ************************************ 00:06:29.580 06:36:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:29.580 
06:36:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.580 06:36:43 -- accel/accel.sh@17 -- # local accel_module 00:06:29.580 06:36:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:29.580 06:36:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:29.580 06:36:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.580 06:36:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.580 06:36:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.580 06:36:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.580 06:36:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.580 06:36:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.580 06:36:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.580 06:36:43 -- accel/accel.sh@42 -- # jq -r . 00:06:29.580 [2024-12-14 06:36:43.421691] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:29.580 [2024-12-14 06:36:43.421802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58928 ] 00:06:29.580 [2024-12-14 06:36:43.544573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.850 [2024-12-14 06:36:43.629290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.227 06:36:44 -- accel/accel.sh@18 -- # out=' 00:06:31.227 SPDK Configuration: 00:06:31.227 Core mask: 0x1 00:06:31.227 00:06:31.227 Accel Perf Configuration: 00:06:31.227 Workload Type: compare 00:06:31.227 Transfer size: 4096 bytes 00:06:31.227 Vector count 1 00:06:31.227 Module: software 00:06:31.227 Queue depth: 32 00:06:31.227 Allocate depth: 32 00:06:31.227 # threads/core: 1 00:06:31.227 Run time: 1 seconds 00:06:31.227 Verify: Yes 00:06:31.227 00:06:31.227 Running for 1 seconds... 00:06:31.227 00:06:31.227 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:31.227 ------------------------------------------------------------------------------------ 00:06:31.227 0,0 569760/s 2225 MiB/s 0 0 00:06:31.227 ==================================================================================== 00:06:31.227 Total 569760/s 2225 MiB/s 0 0' 00:06:31.227 06:36:44 -- accel/accel.sh@20 -- # IFS=: 00:06:31.227 06:36:44 -- accel/accel.sh@20 -- # read -r var val 00:06:31.227 06:36:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:31.227 06:36:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.227 06:36:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:31.227 06:36:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.227 06:36:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.227 06:36:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.227 06:36:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.227 06:36:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.227 06:36:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.227 06:36:44 -- accel/accel.sh@42 -- # jq -r . 00:06:31.227 [2024-12-14 06:36:44.948436] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
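The compare workload checks two equal-length buffers; any mismatch would show up in the Failed/Miscompares columns, and 569760 * 4096 bytes is about 2225 MiB/s, matching the row above. A minimal sketch with an illustrative name (not SPDK's API):

    #include <stddef.h>
    #include <string.h>

    /* Illustrative compare: non-zero when the two buffers differ, i.e. the
     * condition the "Failed" column above would count. */
    static int accel_compare(const void *a, const void *b, size_t len)
    {
        return memcmp(a, b, len) != 0;
    }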
00:06:31.227 [2024-12-14 06:36:44.948535] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58948 ] 00:06:31.227 [2024-12-14 06:36:45.085444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.227 [2024-12-14 06:36:45.159902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.485 06:36:45 -- accel/accel.sh@21 -- # val= 00:06:31.485 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.485 06:36:45 -- accel/accel.sh@21 -- # val= 00:06:31.485 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.485 06:36:45 -- accel/accel.sh@21 -- # val=0x1 00:06:31.485 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.485 06:36:45 -- accel/accel.sh@21 -- # val= 00:06:31.485 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.485 06:36:45 -- accel/accel.sh@21 -- # val= 00:06:31.485 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.485 06:36:45 -- accel/accel.sh@21 -- # val=compare 00:06:31.485 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.485 06:36:45 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.485 06:36:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:31.485 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.485 06:36:45 -- accel/accel.sh@21 -- # val= 00:06:31.485 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.485 06:36:45 -- accel/accel.sh@21 -- # val=software 00:06:31.485 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.485 06:36:45 -- accel/accel.sh@23 -- # accel_module=software 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.485 06:36:45 -- accel/accel.sh@21 -- # val=32 00:06:31.485 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.485 06:36:45 -- accel/accel.sh@21 -- # val=32 00:06:31.485 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.485 06:36:45 -- accel/accel.sh@21 -- # val=1 00:06:31.485 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.485 06:36:45 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:31.485 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.485 06:36:45 -- accel/accel.sh@21 -- # val=Yes 00:06:31.485 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.485 06:36:45 -- accel/accel.sh@21 -- # val= 00:06:31.485 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.485 06:36:45 -- accel/accel.sh@21 -- # val= 00:06:31.485 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.485 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:32.863 06:36:46 -- accel/accel.sh@21 -- # val= 00:06:32.863 06:36:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.863 06:36:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.863 06:36:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.863 06:36:46 -- accel/accel.sh@21 -- # val= 00:06:32.863 06:36:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.863 06:36:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.863 06:36:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.863 06:36:46 -- accel/accel.sh@21 -- # val= 00:06:32.863 06:36:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.863 06:36:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.863 06:36:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.863 06:36:46 -- accel/accel.sh@21 -- # val= 00:06:32.863 06:36:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.863 06:36:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.863 06:36:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.863 06:36:46 -- accel/accel.sh@21 -- # val= 00:06:32.863 06:36:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.863 06:36:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.863 06:36:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.863 06:36:46 -- accel/accel.sh@21 -- # val= 00:06:32.863 06:36:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.863 06:36:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.863 06:36:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.863 06:36:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:32.863 06:36:46 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:32.863 06:36:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.863 00:06:32.863 real 0m3.059s 00:06:32.863 user 0m2.601s 00:06:32.863 sys 0m0.261s 00:06:32.863 06:36:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.863 06:36:46 -- common/autotest_common.sh@10 -- # set +x 00:06:32.863 ************************************ 00:06:32.863 END TEST accel_compare 00:06:32.863 ************************************ 00:06:32.863 06:36:46 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:32.863 06:36:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:32.863 06:36:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.863 06:36:46 -- common/autotest_common.sh@10 -- # set +x 00:06:32.863 ************************************ 00:06:32.863 START TEST accel_xor 00:06:32.863 ************************************ 00:06:32.863 06:36:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:32.863 06:36:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.863 06:36:46 -- accel/accel.sh@17 -- # local accel_module 00:06:32.863 
06:36:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:32.863 06:36:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:32.863 06:36:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.863 06:36:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.863 06:36:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.863 06:36:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.863 06:36:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.863 06:36:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.863 06:36:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.863 06:36:46 -- accel/accel.sh@42 -- # jq -r . 00:06:32.863 [2024-12-14 06:36:46.530855] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:32.863 [2024-12-14 06:36:46.530972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58981 ] 00:06:32.863 [2024-12-14 06:36:46.667202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.863 [2024-12-14 06:36:46.745597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.240 06:36:48 -- accel/accel.sh@18 -- # out=' 00:06:34.240 SPDK Configuration: 00:06:34.240 Core mask: 0x1 00:06:34.240 00:06:34.240 Accel Perf Configuration: 00:06:34.240 Workload Type: xor 00:06:34.240 Source buffers: 2 00:06:34.240 Transfer size: 4096 bytes 00:06:34.240 Vector count 1 00:06:34.240 Module: software 00:06:34.240 Queue depth: 32 00:06:34.240 Allocate depth: 32 00:06:34.240 # threads/core: 1 00:06:34.240 Run time: 1 seconds 00:06:34.240 Verify: Yes 00:06:34.240 00:06:34.240 Running for 1 seconds... 00:06:34.240 00:06:34.240 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:34.240 ------------------------------------------------------------------------------------ 00:06:34.240 0,0 266432/s 1040 MiB/s 0 0 00:06:34.240 ==================================================================================== 00:06:34.240 Total 266432/s 1040 MiB/s 0 0' 00:06:34.240 06:36:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.240 06:36:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:34.240 06:36:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.240 06:36:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.240 06:36:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:34.240 06:36:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.240 06:36:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.240 06:36:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.240 06:36:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.240 06:36:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.240 06:36:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.240 06:36:48 -- accel/accel.sh@42 -- # jq -r . 00:06:34.240 [2024-12-14 06:36:48.064194] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
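The xor workload fills the destination with the byte-wise XOR of its source buffers: 2 of them in this run and 3 in the -x 3 run that follows, so one sketch covers both (illustrative signature, not SPDK's API). 266432 * 4096 bytes is about 1040 MiB/s, matching the row above.

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative xor over nsrcs source buffers:
     * dst[i] = srcs[0][i] ^ srcs[1][i] ^ ... ^ srcs[nsrcs-1][i] */
    static void accel_xor(uint8_t *dst, uint8_t *const srcs[], int nsrcs, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            uint8_t v = srcs[0][i];
            for (int s = 1; s < nsrcs; s++)
                v ^= srcs[s][i];
            dst[i] = v;
        }
    }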
00:06:34.240 [2024-12-14 06:36:48.064290] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59002 ] 00:06:34.240 [2024-12-14 06:36:48.195834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.498 [2024-12-14 06:36:48.271661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.498 06:36:48 -- accel/accel.sh@21 -- # val= 00:06:34.498 06:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.498 06:36:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.498 06:36:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.498 06:36:48 -- accel/accel.sh@21 -- # val= 00:06:34.498 06:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.498 06:36:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.498 06:36:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.498 06:36:48 -- accel/accel.sh@21 -- # val=0x1 00:06:34.498 06:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.498 06:36:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.498 06:36:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.498 06:36:48 -- accel/accel.sh@21 -- # val= 00:06:34.498 06:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.498 06:36:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.499 06:36:48 -- accel/accel.sh@21 -- # val= 00:06:34.499 06:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.499 06:36:48 -- accel/accel.sh@21 -- # val=xor 00:06:34.499 06:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.499 06:36:48 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.499 06:36:48 -- accel/accel.sh@21 -- # val=2 00:06:34.499 06:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.499 06:36:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.499 06:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.499 06:36:48 -- accel/accel.sh@21 -- # val= 00:06:34.499 06:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.499 06:36:48 -- accel/accel.sh@21 -- # val=software 00:06:34.499 06:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.499 06:36:48 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.499 06:36:48 -- accel/accel.sh@21 -- # val=32 00:06:34.499 06:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.499 06:36:48 -- accel/accel.sh@21 -- # val=32 00:06:34.499 06:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.499 06:36:48 -- accel/accel.sh@21 -- # val=1 00:06:34.499 06:36:48 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.499 06:36:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.499 06:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.499 06:36:48 -- accel/accel.sh@21 -- # val=Yes 00:06:34.499 06:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.499 06:36:48 -- accel/accel.sh@21 -- # val= 00:06:34.499 06:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.499 06:36:48 -- accel/accel.sh@21 -- # val= 00:06:34.499 06:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.499 06:36:48 -- accel/accel.sh@20 -- # read -r var val 00:06:35.876 06:36:49 -- accel/accel.sh@21 -- # val= 00:06:35.876 06:36:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.876 06:36:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.876 06:36:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.876 06:36:49 -- accel/accel.sh@21 -- # val= 00:06:35.876 06:36:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.876 06:36:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.876 06:36:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.876 06:36:49 -- accel/accel.sh@21 -- # val= 00:06:35.876 06:36:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.876 06:36:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.876 06:36:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.876 06:36:49 -- accel/accel.sh@21 -- # val= 00:06:35.876 06:36:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.876 06:36:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.876 06:36:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.876 06:36:49 -- accel/accel.sh@21 -- # val= 00:06:35.876 06:36:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.876 06:36:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.876 06:36:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.876 06:36:49 -- accel/accel.sh@21 -- # val= 00:06:35.876 06:36:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.876 06:36:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.876 ************************************ 00:06:35.876 END TEST accel_xor 00:06:35.876 ************************************ 00:06:35.876 06:36:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.876 06:36:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.876 06:36:49 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:35.876 06:36:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.876 00:06:35.876 real 0m3.061s 00:06:35.876 user 0m2.598s 00:06:35.876 sys 0m0.263s 00:06:35.876 06:36:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.876 06:36:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.876 06:36:49 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:35.876 06:36:49 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:35.876 06:36:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.876 06:36:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.876 ************************************ 00:06:35.876 START TEST accel_xor 00:06:35.876 ************************************ 00:06:35.876 
06:36:49 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:35.876 06:36:49 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.876 06:36:49 -- accel/accel.sh@17 -- # local accel_module 00:06:35.876 06:36:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:35.876 06:36:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:35.876 06:36:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.876 06:36:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.876 06:36:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.876 06:36:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.876 06:36:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.876 06:36:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.876 06:36:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.876 06:36:49 -- accel/accel.sh@42 -- # jq -r . 00:06:35.876 [2024-12-14 06:36:49.651025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:35.876 [2024-12-14 06:36:49.651201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59032 ] 00:06:35.876 [2024-12-14 06:36:49.780928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.876 [2024-12-14 06:36:49.859653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.252 06:36:51 -- accel/accel.sh@18 -- # out=' 00:06:37.252 SPDK Configuration: 00:06:37.252 Core mask: 0x1 00:06:37.252 00:06:37.252 Accel Perf Configuration: 00:06:37.252 Workload Type: xor 00:06:37.252 Source buffers: 3 00:06:37.252 Transfer size: 4096 bytes 00:06:37.252 Vector count 1 00:06:37.252 Module: software 00:06:37.252 Queue depth: 32 00:06:37.252 Allocate depth: 32 00:06:37.252 # threads/core: 1 00:06:37.252 Run time: 1 seconds 00:06:37.252 Verify: Yes 00:06:37.252 00:06:37.252 Running for 1 seconds... 00:06:37.252 00:06:37.252 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:37.252 ------------------------------------------------------------------------------------ 00:06:37.252 0,0 255008/s 996 MiB/s 0 0 00:06:37.252 ==================================================================================== 00:06:37.252 Total 255008/s 996 MiB/s 0 0' 00:06:37.252 06:36:51 -- accel/accel.sh@20 -- # IFS=: 00:06:37.252 06:36:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:37.252 06:36:51 -- accel/accel.sh@20 -- # read -r var val 00:06:37.252 06:36:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:37.252 06:36:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.252 06:36:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.252 06:36:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.252 06:36:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.252 06:36:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.252 06:36:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.252 06:36:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.252 06:36:51 -- accel/accel.sh@42 -- # jq -r . 00:06:37.252 [2024-12-14 06:36:51.167888] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:37.252 [2024-12-14 06:36:51.168011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59058 ] 00:06:37.511 [2024-12-14 06:36:51.295730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.511 [2024-12-14 06:36:51.375894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.511 06:36:51 -- accel/accel.sh@21 -- # val= 00:06:37.511 06:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.511 06:36:51 -- accel/accel.sh@20 -- # IFS=: 00:06:37.511 06:36:51 -- accel/accel.sh@20 -- # read -r var val 00:06:37.511 06:36:51 -- accel/accel.sh@21 -- # val= 00:06:37.511 06:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # IFS=: 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # read -r var val 00:06:37.512 06:36:51 -- accel/accel.sh@21 -- # val=0x1 00:06:37.512 06:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # IFS=: 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # read -r var val 00:06:37.512 06:36:51 -- accel/accel.sh@21 -- # val= 00:06:37.512 06:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # IFS=: 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # read -r var val 00:06:37.512 06:36:51 -- accel/accel.sh@21 -- # val= 00:06:37.512 06:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # IFS=: 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # read -r var val 00:06:37.512 06:36:51 -- accel/accel.sh@21 -- # val=xor 00:06:37.512 06:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.512 06:36:51 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # IFS=: 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # read -r var val 00:06:37.512 06:36:51 -- accel/accel.sh@21 -- # val=3 00:06:37.512 06:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # IFS=: 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # read -r var val 00:06:37.512 06:36:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:37.512 06:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # IFS=: 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # read -r var val 00:06:37.512 06:36:51 -- accel/accel.sh@21 -- # val= 00:06:37.512 06:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # IFS=: 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # read -r var val 00:06:37.512 06:36:51 -- accel/accel.sh@21 -- # val=software 00:06:37.512 06:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.512 06:36:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # IFS=: 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # read -r var val 00:06:37.512 06:36:51 -- accel/accel.sh@21 -- # val=32 00:06:37.512 06:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # IFS=: 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # read -r var val 00:06:37.512 06:36:51 -- accel/accel.sh@21 -- # val=32 00:06:37.512 06:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # IFS=: 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # read -r var val 00:06:37.512 06:36:51 -- accel/accel.sh@21 -- # val=1 00:06:37.512 06:36:51 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # IFS=: 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # read -r var val 00:06:37.512 06:36:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:37.512 06:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # IFS=: 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # read -r var val 00:06:37.512 06:36:51 -- accel/accel.sh@21 -- # val=Yes 00:06:37.512 06:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # IFS=: 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # read -r var val 00:06:37.512 06:36:51 -- accel/accel.sh@21 -- # val= 00:06:37.512 06:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # IFS=: 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # read -r var val 00:06:37.512 06:36:51 -- accel/accel.sh@21 -- # val= 00:06:37.512 06:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # IFS=: 00:06:37.512 06:36:51 -- accel/accel.sh@20 -- # read -r var val 00:06:38.918 06:36:52 -- accel/accel.sh@21 -- # val= 00:06:38.918 06:36:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.918 06:36:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.918 06:36:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.918 06:36:52 -- accel/accel.sh@21 -- # val= 00:06:38.918 06:36:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.918 06:36:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.918 06:36:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.918 06:36:52 -- accel/accel.sh@21 -- # val= 00:06:38.918 06:36:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.918 06:36:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.918 06:36:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.918 06:36:52 -- accel/accel.sh@21 -- # val= 00:06:38.918 06:36:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.918 06:36:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.918 06:36:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.918 06:36:52 -- accel/accel.sh@21 -- # val= 00:06:38.918 06:36:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.918 06:36:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.918 06:36:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.918 06:36:52 -- accel/accel.sh@21 -- # val= 00:06:38.918 06:36:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.918 06:36:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.918 06:36:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.918 06:36:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.918 06:36:52 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:38.918 06:36:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.918 00:06:38.918 real 0m3.044s 00:06:38.918 user 0m2.583s 00:06:38.918 sys 0m0.263s 00:06:38.918 06:36:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.918 06:36:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.918 ************************************ 00:06:38.918 END TEST accel_xor 00:06:38.918 ************************************ 00:06:38.918 06:36:52 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:38.918 06:36:52 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:38.918 06:36:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.918 06:36:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.918 ************************************ 00:06:38.918 START TEST accel_dif_verify 00:06:38.918 ************************************ 
00:06:38.918 06:36:52 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:38.918 06:36:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.918 06:36:52 -- accel/accel.sh@17 -- # local accel_module 00:06:38.918 06:36:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:38.918 06:36:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:38.918 06:36:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.918 06:36:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.918 06:36:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.918 06:36:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.918 06:36:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.918 06:36:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.918 06:36:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.918 06:36:52 -- accel/accel.sh@42 -- # jq -r . 00:06:38.918 [2024-12-14 06:36:52.749834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:38.918 [2024-12-14 06:36:52.749912] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59087 ] 00:06:38.918 [2024-12-14 06:36:52.882595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.176 [2024-12-14 06:36:52.960793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.550 06:36:54 -- accel/accel.sh@18 -- # out=' 00:06:40.550 SPDK Configuration: 00:06:40.550 Core mask: 0x1 00:06:40.550 00:06:40.550 Accel Perf Configuration: 00:06:40.550 Workload Type: dif_verify 00:06:40.550 Vector size: 4096 bytes 00:06:40.550 Transfer size: 4096 bytes 00:06:40.550 Block size: 512 bytes 00:06:40.550 Metadata size: 8 bytes 00:06:40.550 Vector count 1 00:06:40.550 Module: software 00:06:40.550 Queue depth: 32 00:06:40.550 Allocate depth: 32 00:06:40.550 # threads/core: 1 00:06:40.550 Run time: 1 seconds 00:06:40.550 Verify: No 00:06:40.550 00:06:40.550 Running for 1 seconds... 00:06:40.550 00:06:40.550 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:40.550 ------------------------------------------------------------------------------------ 00:06:40.550 0,0 125728/s 498 MiB/s 0 0 00:06:40.550 ==================================================================================== 00:06:40.550 Total 125728/s 491 MiB/s 0 0' 00:06:40.550 06:36:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.550 06:36:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.550 06:36:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:40.550 06:36:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.550 06:36:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:40.550 06:36:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.550 06:36:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.550 06:36:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.550 06:36:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.550 06:36:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.550 06:36:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.550 06:36:54 -- accel/accel.sh@42 -- # jq -r . 00:06:40.550 [2024-12-14 06:36:54.279971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
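The dif_verify run above works on 4096-byte transfers made of eight 512-byte blocks, each carrying 8 bytes of protection metadata (the Block size and Metadata size lines); the per-core row (498 MiB/s) appears to count those extra 64 bytes per transfer while the Total row (491 MiB/s) counts only the 4096 data bytes. The sketch below assumes the standard T10 DIF tuple (16-bit guard CRC, 16-bit application tag, 32-bit reference tag) and the T10 CRC-16 polynomial 0x8BB7; the exact tag checks, byte order, and buffer layout accel_perf uses are not visible in this log, and the names are illustrative. The same tuple is what the dif_generate run further below fills in rather than checks.

    #include <stddef.h>
    #include <stdint.h>

    struct dif_tuple {            /* assumed 8-byte T10 DIF layout, host byte order */
        uint16_t guard;           /* CRC-16 over the 512-byte data block */
        uint16_t app_tag;
        uint32_t ref_tag;
    };

    /* T10 DIF guard CRC: polynomial 0x8BB7, init 0, no reflection, no final XOR. */
    static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0;
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)(buf[i] << 8);
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    /* What dif_generate does per 512-byte block: compute and store the tuple. */
    static void dif_generate_block(const uint8_t block[512], uint32_t ref_tag,
                                   struct dif_tuple *dif)
    {
        dif->guard = crc16_t10dif(block, 512);
        dif->app_tag = 0;
        dif->ref_tag = ref_tag;
    }

    /* What dif_verify does per block: recompute the guard and check the tags. */
    static int dif_verify_block(const uint8_t block[512], uint32_t ref_tag,
                                const struct dif_tuple *dif)
    {
        return dif->guard == crc16_t10dif(block, 512) && dif->ref_tag == ref_tag;
    }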
00:06:40.550 [2024-12-14 06:36:54.280102] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59112 ] 00:06:40.550 [2024-12-14 06:36:54.408355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.550 [2024-12-14 06:36:54.486242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.809 06:36:54 -- accel/accel.sh@21 -- # val= 00:06:40.809 06:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.809 06:36:54 -- accel/accel.sh@21 -- # val= 00:06:40.809 06:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.809 06:36:54 -- accel/accel.sh@21 -- # val=0x1 00:06:40.809 06:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.809 06:36:54 -- accel/accel.sh@21 -- # val= 00:06:40.809 06:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.809 06:36:54 -- accel/accel.sh@21 -- # val= 00:06:40.809 06:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.809 06:36:54 -- accel/accel.sh@21 -- # val=dif_verify 00:06:40.809 06:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.809 06:36:54 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.809 06:36:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.809 06:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.809 06:36:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.809 06:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.809 06:36:54 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:40.809 06:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.809 06:36:54 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:40.809 06:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.809 06:36:54 -- accel/accel.sh@21 -- # val= 00:06:40.809 06:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.809 06:36:54 -- accel/accel.sh@21 -- # val=software 00:06:40.809 06:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.809 06:36:54 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.809 06:36:54 -- accel/accel.sh@21 
-- # val=32 00:06:40.809 06:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.809 06:36:54 -- accel/accel.sh@21 -- # val=32 00:06:40.809 06:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.809 06:36:54 -- accel/accel.sh@21 -- # val=1 00:06:40.809 06:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.809 06:36:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.809 06:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.809 06:36:54 -- accel/accel.sh@21 -- # val=No 00:06:40.809 06:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.809 06:36:54 -- accel/accel.sh@21 -- # val= 00:06:40.809 06:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.809 06:36:54 -- accel/accel.sh@21 -- # val= 00:06:40.809 06:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.809 06:36:54 -- accel/accel.sh@20 -- # read -r var val 00:06:42.183 06:36:55 -- accel/accel.sh@21 -- # val= 00:06:42.183 06:36:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.183 06:36:55 -- accel/accel.sh@20 -- # IFS=: 00:06:42.183 06:36:55 -- accel/accel.sh@20 -- # read -r var val 00:06:42.183 06:36:55 -- accel/accel.sh@21 -- # val= 00:06:42.183 06:36:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.183 06:36:55 -- accel/accel.sh@20 -- # IFS=: 00:06:42.183 06:36:55 -- accel/accel.sh@20 -- # read -r var val 00:06:42.183 06:36:55 -- accel/accel.sh@21 -- # val= 00:06:42.183 06:36:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.183 06:36:55 -- accel/accel.sh@20 -- # IFS=: 00:06:42.183 06:36:55 -- accel/accel.sh@20 -- # read -r var val 00:06:42.183 06:36:55 -- accel/accel.sh@21 -- # val= 00:06:42.183 06:36:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.184 06:36:55 -- accel/accel.sh@20 -- # IFS=: 00:06:42.184 06:36:55 -- accel/accel.sh@20 -- # read -r var val 00:06:42.184 06:36:55 -- accel/accel.sh@21 -- # val= 00:06:42.184 06:36:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.184 06:36:55 -- accel/accel.sh@20 -- # IFS=: 00:06:42.184 06:36:55 -- accel/accel.sh@20 -- # read -r var val 00:06:42.184 06:36:55 -- accel/accel.sh@21 -- # val= 00:06:42.184 06:36:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.184 06:36:55 -- accel/accel.sh@20 -- # IFS=: 00:06:42.184 06:36:55 -- accel/accel.sh@20 -- # read -r var val 00:06:42.184 06:36:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:42.184 06:36:55 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:42.184 ************************************ 00:06:42.184 END TEST accel_dif_verify 00:06:42.184 ************************************ 00:06:42.184 06:36:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.184 00:06:42.184 real 0m3.055s 00:06:42.184 user 0m2.596s 00:06:42.184 sys 0m0.261s 00:06:42.184 06:36:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.184 
06:36:55 -- common/autotest_common.sh@10 -- # set +x 00:06:42.184 06:36:55 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:42.184 06:36:55 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:42.184 06:36:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.184 06:36:55 -- common/autotest_common.sh@10 -- # set +x 00:06:42.184 ************************************ 00:06:42.184 START TEST accel_dif_generate 00:06:42.184 ************************************ 00:06:42.184 06:36:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:42.184 06:36:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.184 06:36:55 -- accel/accel.sh@17 -- # local accel_module 00:06:42.184 06:36:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:42.184 06:36:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:42.184 06:36:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.184 06:36:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.184 06:36:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.184 06:36:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.184 06:36:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.184 06:36:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.184 06:36:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.184 06:36:55 -- accel/accel.sh@42 -- # jq -r . 00:06:42.184 [2024-12-14 06:36:55.867996] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:42.184 [2024-12-14 06:36:55.868089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59147 ] 00:06:42.184 [2024-12-14 06:36:56.006447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.184 [2024-12-14 06:36:56.085656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.558 06:36:57 -- accel/accel.sh@18 -- # out=' 00:06:43.558 SPDK Configuration: 00:06:43.558 Core mask: 0x1 00:06:43.558 00:06:43.558 Accel Perf Configuration: 00:06:43.558 Workload Type: dif_generate 00:06:43.558 Vector size: 4096 bytes 00:06:43.558 Transfer size: 4096 bytes 00:06:43.558 Block size: 512 bytes 00:06:43.558 Metadata size: 8 bytes 00:06:43.558 Vector count 1 00:06:43.558 Module: software 00:06:43.558 Queue depth: 32 00:06:43.558 Allocate depth: 32 00:06:43.558 # threads/core: 1 00:06:43.558 Run time: 1 seconds 00:06:43.558 Verify: No 00:06:43.558 00:06:43.558 Running for 1 seconds... 
00:06:43.558 00:06:43.558 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:43.558 ------------------------------------------------------------------------------------ 00:06:43.558 0,0 154048/s 611 MiB/s 0 0 00:06:43.558 ==================================================================================== 00:06:43.558 Total 154048/s 601 MiB/s 0 0' 00:06:43.558 06:36:57 -- accel/accel.sh@20 -- # IFS=: 00:06:43.558 06:36:57 -- accel/accel.sh@20 -- # read -r var val 00:06:43.559 06:36:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:43.559 06:36:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.559 06:36:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:43.559 06:36:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.559 06:36:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.559 06:36:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.559 06:36:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.559 06:36:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.559 06:36:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.559 06:36:57 -- accel/accel.sh@42 -- # jq -r . 00:06:43.559 [2024-12-14 06:36:57.404905] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:43.559 [2024-12-14 06:36:57.405087] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59166 ] 00:06:43.559 [2024-12-14 06:36:57.537751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.817 [2024-12-14 06:36:57.616218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.817 06:36:57 -- accel/accel.sh@21 -- # val= 00:06:43.817 06:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.817 06:36:57 -- accel/accel.sh@20 -- # IFS=: 00:06:43.817 06:36:57 -- accel/accel.sh@20 -- # read -r var val 00:06:43.817 06:36:57 -- accel/accel.sh@21 -- # val= 00:06:43.817 06:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.817 06:36:57 -- accel/accel.sh@20 -- # IFS=: 00:06:43.817 06:36:57 -- accel/accel.sh@20 -- # read -r var val 00:06:43.817 06:36:57 -- accel/accel.sh@21 -- # val=0x1 00:06:43.817 06:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.817 06:36:57 -- accel/accel.sh@20 -- # IFS=: 00:06:43.817 06:36:57 -- accel/accel.sh@20 -- # read -r var val 00:06:43.817 06:36:57 -- accel/accel.sh@21 -- # val= 00:06:43.817 06:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.817 06:36:57 -- accel/accel.sh@20 -- # IFS=: 00:06:43.817 06:36:57 -- accel/accel.sh@20 -- # read -r var val 00:06:43.817 06:36:57 -- accel/accel.sh@21 -- # val= 00:06:43.817 06:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.817 06:36:57 -- accel/accel.sh@20 -- # IFS=: 00:06:43.817 06:36:57 -- accel/accel.sh@20 -- # read -r var val 00:06:43.817 06:36:57 -- accel/accel.sh@21 -- # val=dif_generate 00:06:43.817 06:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.817 06:36:57 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:43.817 06:36:57 -- accel/accel.sh@20 -- # IFS=: 00:06:43.817 06:36:57 -- accel/accel.sh@20 -- # read -r var val 00:06:43.817 06:36:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:43.817 06:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.817 06:36:57 -- accel/accel.sh@20 -- # IFS=: 00:06:43.817 06:36:57 -- accel/accel.sh@20 -- # read -r var val 
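For reading these tables: the Total bandwidth figure tracks the transfer rate multiplied by the transfer size (4096 bytes here). A quick shell check against the dif_generate Total row above, as a throwaway calculation rather than anything from the test scripts:

    # 154048 transfers/s x 4096 bytes per transfer, in MiB/s
    echo "$(( 154048 * 4096 / 1048576 )) MiB/s"   # -> 601 MiB/s, matching the Total row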
00:06:43.817 06:36:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:43.817 06:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.817 06:36:57 -- accel/accel.sh@20 -- # IFS=: 00:06:43.817 06:36:57 -- accel/accel.sh@20 -- # read -r var val 00:06:43.817 06:36:57 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:43.817 06:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.817 06:36:57 -- accel/accel.sh@20 -- # IFS=: 00:06:43.818 06:36:57 -- accel/accel.sh@20 -- # read -r var val 00:06:43.818 06:36:57 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:43.818 06:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.818 06:36:57 -- accel/accel.sh@20 -- # IFS=: 00:06:43.818 06:36:57 -- accel/accel.sh@20 -- # read -r var val 00:06:43.818 06:36:57 -- accel/accel.sh@21 -- # val= 00:06:43.818 06:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.818 06:36:57 -- accel/accel.sh@20 -- # IFS=: 00:06:43.818 06:36:57 -- accel/accel.sh@20 -- # read -r var val 00:06:43.818 06:36:57 -- accel/accel.sh@21 -- # val=software 00:06:43.818 06:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.818 06:36:57 -- accel/accel.sh@23 -- # accel_module=software 00:06:43.818 06:36:57 -- accel/accel.sh@20 -- # IFS=: 00:06:43.818 06:36:57 -- accel/accel.sh@20 -- # read -r var val 00:06:43.818 06:36:57 -- accel/accel.sh@21 -- # val=32 00:06:43.818 06:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.818 06:36:57 -- accel/accel.sh@20 -- # IFS=: 00:06:43.818 06:36:57 -- accel/accel.sh@20 -- # read -r var val 00:06:43.818 06:36:57 -- accel/accel.sh@21 -- # val=32 00:06:43.818 06:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.818 06:36:57 -- accel/accel.sh@20 -- # IFS=: 00:06:43.818 06:36:57 -- accel/accel.sh@20 -- # read -r var val 00:06:43.818 06:36:57 -- accel/accel.sh@21 -- # val=1 00:06:43.818 06:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.818 06:36:57 -- accel/accel.sh@20 -- # IFS=: 00:06:43.818 06:36:57 -- accel/accel.sh@20 -- # read -r var val 00:06:43.818 06:36:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:43.818 06:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.818 06:36:57 -- accel/accel.sh@20 -- # IFS=: 00:06:43.818 06:36:57 -- accel/accel.sh@20 -- # read -r var val 00:06:43.818 06:36:57 -- accel/accel.sh@21 -- # val=No 00:06:43.818 06:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.818 06:36:57 -- accel/accel.sh@20 -- # IFS=: 00:06:43.818 06:36:57 -- accel/accel.sh@20 -- # read -r var val 00:06:43.818 06:36:57 -- accel/accel.sh@21 -- # val= 00:06:43.818 06:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.818 06:36:57 -- accel/accel.sh@20 -- # IFS=: 00:06:43.818 06:36:57 -- accel/accel.sh@20 -- # read -r var val 00:06:43.818 06:36:57 -- accel/accel.sh@21 -- # val= 00:06:43.818 06:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.818 06:36:57 -- accel/accel.sh@20 -- # IFS=: 00:06:43.818 06:36:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.194 06:36:58 -- accel/accel.sh@21 -- # val= 00:06:45.194 06:36:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.194 06:36:58 -- accel/accel.sh@20 -- # IFS=: 00:06:45.194 06:36:58 -- accel/accel.sh@20 -- # read -r var val 00:06:45.194 06:36:58 -- accel/accel.sh@21 -- # val= 00:06:45.194 06:36:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.194 06:36:58 -- accel/accel.sh@20 -- # IFS=: 00:06:45.194 06:36:58 -- accel/accel.sh@20 -- # read -r var val 00:06:45.194 06:36:58 -- accel/accel.sh@21 -- # val= 00:06:45.194 06:36:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.194 06:36:58 -- 
accel/accel.sh@20 -- # IFS=: 00:06:45.194 06:36:58 -- accel/accel.sh@20 -- # read -r var val 00:06:45.194 06:36:58 -- accel/accel.sh@21 -- # val= 00:06:45.194 06:36:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.194 06:36:58 -- accel/accel.sh@20 -- # IFS=: 00:06:45.194 06:36:58 -- accel/accel.sh@20 -- # read -r var val 00:06:45.194 06:36:58 -- accel/accel.sh@21 -- # val= 00:06:45.194 ************************************ 00:06:45.194 END TEST accel_dif_generate 00:06:45.194 ************************************ 00:06:45.194 06:36:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.194 06:36:58 -- accel/accel.sh@20 -- # IFS=: 00:06:45.194 06:36:58 -- accel/accel.sh@20 -- # read -r var val 00:06:45.194 06:36:58 -- accel/accel.sh@21 -- # val= 00:06:45.194 06:36:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.194 06:36:58 -- accel/accel.sh@20 -- # IFS=: 00:06:45.194 06:36:58 -- accel/accel.sh@20 -- # read -r var val 00:06:45.194 06:36:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:45.194 06:36:58 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:45.194 06:36:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.194 00:06:45.194 real 0m3.087s 00:06:45.194 user 0m2.623s 00:06:45.194 sys 0m0.266s 00:06:45.194 06:36:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.194 06:36:58 -- common/autotest_common.sh@10 -- # set +x 00:06:45.194 06:36:58 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:45.194 06:36:58 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:45.194 06:36:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.194 06:36:58 -- common/autotest_common.sh@10 -- # set +x 00:06:45.194 ************************************ 00:06:45.194 START TEST accel_dif_generate_copy 00:06:45.194 ************************************ 00:06:45.194 06:36:58 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:45.194 06:36:58 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.194 06:36:58 -- accel/accel.sh@17 -- # local accel_module 00:06:45.194 06:36:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:45.194 06:36:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:45.194 06:36:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.194 06:36:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.194 06:36:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.194 06:36:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.194 06:36:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.194 06:36:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.194 06:36:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.194 06:36:58 -- accel/accel.sh@42 -- # jq -r . 00:06:45.194 [2024-12-14 06:36:59.001980] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:45.194 [2024-12-14 06:36:59.002080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59202 ] 00:06:45.194 [2024-12-14 06:36:59.131042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.453 [2024-12-14 06:36:59.217354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.827 06:37:00 -- accel/accel.sh@18 -- # out=' 00:06:46.827 SPDK Configuration: 00:06:46.827 Core mask: 0x1 00:06:46.827 00:06:46.827 Accel Perf Configuration: 00:06:46.827 Workload Type: dif_generate_copy 00:06:46.827 Vector size: 4096 bytes 00:06:46.827 Transfer size: 4096 bytes 00:06:46.827 Vector count 1 00:06:46.827 Module: software 00:06:46.827 Queue depth: 32 00:06:46.827 Allocate depth: 32 00:06:46.827 # threads/core: 1 00:06:46.827 Run time: 1 seconds 00:06:46.827 Verify: No 00:06:46.827 00:06:46.827 Running for 1 seconds... 00:06:46.827 00:06:46.827 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.827 ------------------------------------------------------------------------------------ 00:06:46.827 0,0 112864/s 447 MiB/s 0 0 00:06:46.827 ==================================================================================== 00:06:46.827 Total 112864/s 440 MiB/s 0 0' 00:06:46.827 06:37:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.827 06:37:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.827 06:37:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:46.827 06:37:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.827 06:37:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:46.827 06:37:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.827 06:37:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.827 06:37:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.827 06:37:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.827 06:37:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.827 06:37:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.827 06:37:00 -- accel/accel.sh@42 -- # jq -r . 00:06:46.827 [2024-12-14 06:37:00.543305] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
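Each of the runs above is a single accel_perf invocation; pulling the traced dif_generate_copy command out of the harness makes the knobs easier to see. The flag readings below are inferred from the report fields (Run time, Workload Type), not from accel_perf's own documentation, so treat them as a best guess:

    # -w selects the workload, -t the run time in seconds; -c feeds the JSON accel
    # config assembled by build_accel_config (empty in this job, hence the
    # accel_json_cfg=() and [[ -n '' ]] entries in the trace)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy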
00:06:46.827 [2024-12-14 06:37:00.543404] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59221 ] 00:06:46.827 [2024-12-14 06:37:00.679434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.827 [2024-12-14 06:37:00.775798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.086 06:37:00 -- accel/accel.sh@21 -- # val= 00:06:47.086 06:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.086 06:37:00 -- accel/accel.sh@21 -- # val= 00:06:47.086 06:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.086 06:37:00 -- accel/accel.sh@21 -- # val=0x1 00:06:47.086 06:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.086 06:37:00 -- accel/accel.sh@21 -- # val= 00:06:47.086 06:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.086 06:37:00 -- accel/accel.sh@21 -- # val= 00:06:47.086 06:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.086 06:37:00 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:47.086 06:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.086 06:37:00 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.086 06:37:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:47.086 06:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.086 06:37:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:47.086 06:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.086 06:37:00 -- accel/accel.sh@21 -- # val= 00:06:47.086 06:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.086 06:37:00 -- accel/accel.sh@21 -- # val=software 00:06:47.086 06:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.086 06:37:00 -- accel/accel.sh@23 -- # accel_module=software 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.086 06:37:00 -- accel/accel.sh@21 -- # val=32 00:06:47.086 06:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.086 06:37:00 -- accel/accel.sh@21 -- # val=32 00:06:47.086 06:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.086 06:37:00 -- accel/accel.sh@21 
-- # val=1 00:06:47.086 06:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.086 06:37:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:47.086 06:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.086 06:37:00 -- accel/accel.sh@21 -- # val=No 00:06:47.086 06:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.086 06:37:00 -- accel/accel.sh@21 -- # val= 00:06:47.086 06:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.086 06:37:00 -- accel/accel.sh@21 -- # val= 00:06:47.086 06:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.086 06:37:00 -- accel/accel.sh@20 -- # read -r var val 00:06:48.470 06:37:02 -- accel/accel.sh@21 -- # val= 00:06:48.470 06:37:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.470 06:37:02 -- accel/accel.sh@20 -- # IFS=: 00:06:48.470 06:37:02 -- accel/accel.sh@20 -- # read -r var val 00:06:48.470 06:37:02 -- accel/accel.sh@21 -- # val= 00:06:48.470 06:37:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.470 06:37:02 -- accel/accel.sh@20 -- # IFS=: 00:06:48.470 06:37:02 -- accel/accel.sh@20 -- # read -r var val 00:06:48.470 06:37:02 -- accel/accel.sh@21 -- # val= 00:06:48.470 06:37:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.470 06:37:02 -- accel/accel.sh@20 -- # IFS=: 00:06:48.470 06:37:02 -- accel/accel.sh@20 -- # read -r var val 00:06:48.470 06:37:02 -- accel/accel.sh@21 -- # val= 00:06:48.470 06:37:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.470 06:37:02 -- accel/accel.sh@20 -- # IFS=: 00:06:48.470 06:37:02 -- accel/accel.sh@20 -- # read -r var val 00:06:48.470 06:37:02 -- accel/accel.sh@21 -- # val= 00:06:48.470 06:37:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.470 06:37:02 -- accel/accel.sh@20 -- # IFS=: 00:06:48.470 06:37:02 -- accel/accel.sh@20 -- # read -r var val 00:06:48.470 06:37:02 -- accel/accel.sh@21 -- # val= 00:06:48.470 06:37:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.470 06:37:02 -- accel/accel.sh@20 -- # IFS=: 00:06:48.470 06:37:02 -- accel/accel.sh@20 -- # read -r var val 00:06:48.470 06:37:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:48.470 06:37:02 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:48.470 06:37:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.470 00:06:48.470 real 0m3.132s 00:06:48.470 user 0m2.664s 00:06:48.470 sys 0m0.268s 00:06:48.470 06:37:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.470 06:37:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.470 ************************************ 00:06:48.470 END TEST accel_dif_generate_copy 00:06:48.470 ************************************ 00:06:48.470 06:37:02 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:48.470 06:37:02 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:48.470 06:37:02 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:48.470 06:37:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.470 06:37:02 -- 
common/autotest_common.sh@10 -- # set +x 00:06:48.470 ************************************ 00:06:48.470 START TEST accel_comp 00:06:48.470 ************************************ 00:06:48.470 06:37:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:48.470 06:37:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.470 06:37:02 -- accel/accel.sh@17 -- # local accel_module 00:06:48.470 06:37:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:48.470 06:37:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:48.470 06:37:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.470 06:37:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.470 06:37:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.470 06:37:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.470 06:37:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.470 06:37:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.470 06:37:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.470 06:37:02 -- accel/accel.sh@42 -- # jq -r . 00:06:48.470 [2024-12-14 06:37:02.197129] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:48.470 [2024-12-14 06:37:02.197243] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59256 ] 00:06:48.470 [2024-12-14 06:37:02.333545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.729 [2024-12-14 06:37:02.475693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.104 06:37:03 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:50.104 00:06:50.104 SPDK Configuration: 00:06:50.104 Core mask: 0x1 00:06:50.104 00:06:50.104 Accel Perf Configuration: 00:06:50.104 Workload Type: compress 00:06:50.104 Transfer size: 4096 bytes 00:06:50.104 Vector count 1 00:06:50.104 Module: software 00:06:50.104 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.104 Queue depth: 32 00:06:50.104 Allocate depth: 32 00:06:50.104 # threads/core: 1 00:06:50.104 Run time: 1 seconds 00:06:50.104 Verify: No 00:06:50.104 00:06:50.104 Running for 1 seconds... 
00:06:50.104 00:06:50.104 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.105 ------------------------------------------------------------------------------------ 00:06:50.105 0,0 57728/s 240 MiB/s 0 0 00:06:50.105 ==================================================================================== 00:06:50.105 Total 57728/s 225 MiB/s 0 0' 00:06:50.105 06:37:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.105 06:37:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.105 06:37:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.105 06:37:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.105 06:37:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.105 06:37:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.105 06:37:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.105 06:37:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.105 06:37:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.105 06:37:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.105 06:37:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.105 06:37:03 -- accel/accel.sh@42 -- # jq -r . 00:06:50.105 [2024-12-14 06:37:03.835228] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:50.105 [2024-12-14 06:37:03.835332] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59277 ] 00:06:50.105 [2024-12-14 06:37:03.971555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.105 [2024-12-14 06:37:04.088887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.363 06:37:04 -- accel/accel.sh@21 -- # val= 00:06:50.363 06:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.363 06:37:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.363 06:37:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.363 06:37:04 -- accel/accel.sh@21 -- # val= 00:06:50.363 06:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.363 06:37:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.363 06:37:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.364 06:37:04 -- accel/accel.sh@21 -- # val= 00:06:50.364 06:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.364 06:37:04 -- accel/accel.sh@21 -- # val=0x1 00:06:50.364 06:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.364 06:37:04 -- accel/accel.sh@21 -- # val= 00:06:50.364 06:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.364 06:37:04 -- accel/accel.sh@21 -- # val= 00:06:50.364 06:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.364 06:37:04 -- accel/accel.sh@21 -- # val=compress 00:06:50.364 06:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.364 06:37:04 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # IFS=: 
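The compress case adds one more input: -l points accel_perf at a file to use as the data set, which surfaces in the report as the File Name line and the "Preparing input file..." message. The traced invocation, wrapped here for readability only:

    # compress the repo's test/accel/bib corpus for 1 second with the software module
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 \
        -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib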
00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.364 06:37:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.364 06:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.364 06:37:04 -- accel/accel.sh@21 -- # val= 00:06:50.364 06:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.364 06:37:04 -- accel/accel.sh@21 -- # val=software 00:06:50.364 06:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.364 06:37:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.364 06:37:04 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.364 06:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.364 06:37:04 -- accel/accel.sh@21 -- # val=32 00:06:50.364 06:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.364 06:37:04 -- accel/accel.sh@21 -- # val=32 00:06:50.364 06:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.364 06:37:04 -- accel/accel.sh@21 -- # val=1 00:06:50.364 06:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.364 06:37:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.364 06:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.364 06:37:04 -- accel/accel.sh@21 -- # val=No 00:06:50.364 06:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.364 06:37:04 -- accel/accel.sh@21 -- # val= 00:06:50.364 06:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.364 06:37:04 -- accel/accel.sh@21 -- # val= 00:06:50.364 06:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.364 06:37:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.769 06:37:05 -- accel/accel.sh@21 -- # val= 00:06:51.769 06:37:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.769 06:37:05 -- accel/accel.sh@20 -- # IFS=: 00:06:51.769 06:37:05 -- accel/accel.sh@20 -- # read -r var val 00:06:51.769 06:37:05 -- accel/accel.sh@21 -- # val= 00:06:51.769 06:37:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.769 06:37:05 -- accel/accel.sh@20 -- # IFS=: 00:06:51.769 06:37:05 -- accel/accel.sh@20 -- # read -r var val 00:06:51.769 06:37:05 -- accel/accel.sh@21 -- # val= 00:06:51.769 06:37:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.769 06:37:05 -- accel/accel.sh@20 -- # IFS=: 00:06:51.769 06:37:05 -- accel/accel.sh@20 -- # read -r var val 00:06:51.769 06:37:05 -- accel/accel.sh@21 -- # val= 
00:06:51.769 06:37:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.769 06:37:05 -- accel/accel.sh@20 -- # IFS=: 00:06:51.769 06:37:05 -- accel/accel.sh@20 -- # read -r var val 00:06:51.769 06:37:05 -- accel/accel.sh@21 -- # val= 00:06:51.769 06:37:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.769 06:37:05 -- accel/accel.sh@20 -- # IFS=: 00:06:51.769 06:37:05 -- accel/accel.sh@20 -- # read -r var val 00:06:51.769 06:37:05 -- accel/accel.sh@21 -- # val= 00:06:51.769 06:37:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.769 06:37:05 -- accel/accel.sh@20 -- # IFS=: 00:06:51.769 06:37:05 -- accel/accel.sh@20 -- # read -r var val 00:06:51.769 06:37:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.769 06:37:05 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:51.769 06:37:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.769 00:06:51.769 real 0m3.263s 00:06:51.769 user 0m2.768s 00:06:51.769 sys 0m0.290s 00:06:51.769 06:37:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.769 ************************************ 00:06:51.769 END TEST accel_comp 00:06:51.769 ************************************ 00:06:51.769 06:37:05 -- common/autotest_common.sh@10 -- # set +x 00:06:51.769 06:37:05 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:51.769 06:37:05 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:51.769 06:37:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.769 06:37:05 -- common/autotest_common.sh@10 -- # set +x 00:06:51.769 ************************************ 00:06:51.769 START TEST accel_decomp 00:06:51.769 ************************************ 00:06:51.769 06:37:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:51.769 06:37:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.769 06:37:05 -- accel/accel.sh@17 -- # local accel_module 00:06:51.769 06:37:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:51.769 06:37:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:51.769 06:37:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.769 06:37:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.769 06:37:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.769 06:37:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.769 06:37:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.769 06:37:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.769 06:37:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.769 06:37:05 -- accel/accel.sh@42 -- # jq -r . 00:06:51.769 [2024-12-14 06:37:05.512671] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:51.769 [2024-12-14 06:37:05.512767] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59318 ] 00:06:51.769 [2024-12-14 06:37:05.648432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.028 [2024-12-14 06:37:05.789510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.404 06:37:07 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:53.404 00:06:53.404 SPDK Configuration: 00:06:53.404 Core mask: 0x1 00:06:53.404 00:06:53.404 Accel Perf Configuration: 00:06:53.404 Workload Type: decompress 00:06:53.404 Transfer size: 4096 bytes 00:06:53.404 Vector count 1 00:06:53.404 Module: software 00:06:53.404 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:53.404 Queue depth: 32 00:06:53.404 Allocate depth: 32 00:06:53.404 # threads/core: 1 00:06:53.404 Run time: 1 seconds 00:06:53.404 Verify: Yes 00:06:53.404 00:06:53.404 Running for 1 seconds... 00:06:53.404 00:06:53.404 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.404 ------------------------------------------------------------------------------------ 00:06:53.404 0,0 82688/s 152 MiB/s 0 0 00:06:53.404 ==================================================================================== 00:06:53.404 Total 82688/s 323 MiB/s 0 0' 00:06:53.405 06:37:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.405 06:37:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:53.405 06:37:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.405 06:37:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:53.405 06:37:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.405 06:37:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.405 06:37:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.405 06:37:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.405 06:37:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.405 06:37:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.405 06:37:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.405 06:37:07 -- accel/accel.sh@42 -- # jq -r . 00:06:53.405 [2024-12-14 06:37:07.151412] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
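One difference between the compress run above and the decompress runs: the decompress invocations carry -y, and their reports flip Verify from No to Yes, i.e. the output is checked against the source data. Side by side, with the accel_perf path abbreviated; the reading of -y is inferred from the Verify field, not from accel_perf documentation:

    # Verify: No  - compress pass above
    accel_perf -c /dev/fd/62 -t 1 -w compress   -l /home/vagrant/spdk_repo/spdk/test/accel/bib
    # Verify: Yes - decompress pass above, -y apparently enabling result verification
    accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y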
00:06:53.405 [2024-12-14 06:37:07.151542] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59339 ] 00:06:53.405 [2024-12-14 06:37:07.281349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.663 [2024-12-14 06:37:07.417427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.663 06:37:07 -- accel/accel.sh@21 -- # val= 00:06:53.663 06:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.663 06:37:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.663 06:37:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.663 06:37:07 -- accel/accel.sh@21 -- # val= 00:06:53.663 06:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.663 06:37:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.663 06:37:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.663 06:37:07 -- accel/accel.sh@21 -- # val= 00:06:53.663 06:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.663 06:37:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.663 06:37:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.663 06:37:07 -- accel/accel.sh@21 -- # val=0x1 00:06:53.663 06:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.663 06:37:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.663 06:37:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.663 06:37:07 -- accel/accel.sh@21 -- # val= 00:06:53.664 06:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.664 06:37:07 -- accel/accel.sh@21 -- # val= 00:06:53.664 06:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.664 06:37:07 -- accel/accel.sh@21 -- # val=decompress 00:06:53.664 06:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.664 06:37:07 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.664 06:37:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.664 06:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.664 06:37:07 -- accel/accel.sh@21 -- # val= 00:06:53.664 06:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.664 06:37:07 -- accel/accel.sh@21 -- # val=software 00:06:53.664 06:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.664 06:37:07 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.664 06:37:07 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:53.664 06:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.664 06:37:07 -- accel/accel.sh@21 -- # val=32 00:06:53.664 06:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.664 06:37:07 -- 
accel/accel.sh@21 -- # val=32 00:06:53.664 06:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.664 06:37:07 -- accel/accel.sh@21 -- # val=1 00:06:53.664 06:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.664 06:37:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:53.664 06:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.664 06:37:07 -- accel/accel.sh@21 -- # val=Yes 00:06:53.664 06:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.664 06:37:07 -- accel/accel.sh@21 -- # val= 00:06:53.664 06:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.664 06:37:07 -- accel/accel.sh@21 -- # val= 00:06:53.664 06:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.664 06:37:07 -- accel/accel.sh@20 -- # read -r var val 00:06:55.040 06:37:08 -- accel/accel.sh@21 -- # val= 00:06:55.040 06:37:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.041 06:37:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.041 06:37:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.041 06:37:08 -- accel/accel.sh@21 -- # val= 00:06:55.041 06:37:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.041 06:37:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.041 06:37:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.041 06:37:08 -- accel/accel.sh@21 -- # val= 00:06:55.041 06:37:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.041 06:37:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.041 06:37:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.041 06:37:08 -- accel/accel.sh@21 -- # val= 00:06:55.041 06:37:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.041 06:37:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.041 06:37:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.041 06:37:08 -- accel/accel.sh@21 -- # val= 00:06:55.041 06:37:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.041 06:37:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.041 06:37:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.041 06:37:08 -- accel/accel.sh@21 -- # val= 00:06:55.041 06:37:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.041 06:37:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.041 06:37:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.041 06:37:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.041 06:37:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:55.041 06:37:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.041 00:06:55.041 real 0m3.236s 00:06:55.041 user 0m2.757s 00:06:55.041 sys 0m0.277s 00:06:55.041 ************************************ 00:06:55.041 END TEST accel_decomp 00:06:55.041 ************************************ 00:06:55.041 06:37:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.041 06:37:08 -- common/autotest_common.sh@10 -- # set +x 00:06:55.041 06:37:08 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:06:55.041 06:37:08 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:55.041 06:37:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.041 06:37:08 -- common/autotest_common.sh@10 -- # set +x 00:06:55.041 ************************************ 00:06:55.041 START TEST accel_decmop_full 00:06:55.041 ************************************ 00:06:55.041 06:37:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:55.041 06:37:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.041 06:37:08 -- accel/accel.sh@17 -- # local accel_module 00:06:55.041 06:37:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:55.041 06:37:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:55.041 06:37:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.041 06:37:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.041 06:37:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.041 06:37:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.041 06:37:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.041 06:37:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.041 06:37:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.041 06:37:08 -- accel/accel.sh@42 -- # jq -r . 00:06:55.041 [2024-12-14 06:37:08.800451] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.041 [2024-12-14 06:37:08.800552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59373 ] 00:06:55.041 [2024-12-14 06:37:08.928357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.041 [2024-12-14 06:37:09.005867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.411 06:37:10 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:56.411 00:06:56.411 SPDK Configuration: 00:06:56.411 Core mask: 0x1 00:06:56.411 00:06:56.411 Accel Perf Configuration: 00:06:56.411 Workload Type: decompress 00:06:56.411 Transfer size: 111250 bytes 00:06:56.411 Vector count 1 00:06:56.411 Module: software 00:06:56.411 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:56.411 Queue depth: 32 00:06:56.411 Allocate depth: 32 00:06:56.411 # threads/core: 1 00:06:56.411 Run time: 1 seconds 00:06:56.411 Verify: Yes 00:06:56.411 00:06:56.411 Running for 1 seconds... 
00:06:56.411 00:06:56.411 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:56.411 ------------------------------------------------------------------------------------ 00:06:56.411 0,0 5664/s 233 MiB/s 0 0 00:06:56.411 ==================================================================================== 00:06:56.411 Total 5664/s 600 MiB/s 0 0' 00:06:56.411 06:37:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.411 06:37:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:56.411 06:37:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.411 06:37:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.411 06:37:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:56.411 06:37:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.411 06:37:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.411 06:37:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.411 06:37:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.411 06:37:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.411 06:37:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.411 06:37:10 -- accel/accel.sh@42 -- # jq -r . 00:06:56.411 [2024-12-14 06:37:10.340077] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.411 [2024-12-14 06:37:10.340178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59393 ] 00:06:56.671 [2024-12-14 06:37:10.476029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.671 [2024-12-14 06:37:10.559570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.671 06:37:10 -- accel/accel.sh@21 -- # val= 00:06:56.671 06:37:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.671 06:37:10 -- accel/accel.sh@21 -- # val= 00:06:56.671 06:37:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.671 06:37:10 -- accel/accel.sh@21 -- # val= 00:06:56.671 06:37:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.671 06:37:10 -- accel/accel.sh@21 -- # val=0x1 00:06:56.671 06:37:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.671 06:37:10 -- accel/accel.sh@21 -- # val= 00:06:56.671 06:37:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.671 06:37:10 -- accel/accel.sh@21 -- # val= 00:06:56.671 06:37:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.671 06:37:10 -- accel/accel.sh@21 -- # val=decompress 00:06:56.671 06:37:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.671 06:37:10 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:56.671 06:37:10 -- accel/accel.sh@20 
-- # IFS=: 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.671 06:37:10 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:56.671 06:37:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.671 06:37:10 -- accel/accel.sh@21 -- # val= 00:06:56.671 06:37:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.671 06:37:10 -- accel/accel.sh@21 -- # val=software 00:06:56.671 06:37:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.671 06:37:10 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.671 06:37:10 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:56.671 06:37:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.671 06:37:10 -- accel/accel.sh@21 -- # val=32 00:06:56.671 06:37:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.671 06:37:10 -- accel/accel.sh@21 -- # val=32 00:06:56.671 06:37:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.671 06:37:10 -- accel/accel.sh@21 -- # val=1 00:06:56.671 06:37:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.671 06:37:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:56.671 06:37:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.671 06:37:10 -- accel/accel.sh@21 -- # val=Yes 00:06:56.671 06:37:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.671 06:37:10 -- accel/accel.sh@21 -- # val= 00:06:56.671 06:37:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.671 06:37:10 -- accel/accel.sh@21 -- # val= 00:06:56.671 06:37:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.671 06:37:10 -- accel/accel.sh@20 -- # read -r var val 00:06:58.046 06:37:11 -- accel/accel.sh@21 -- # val= 00:06:58.046 06:37:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.046 06:37:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.046 06:37:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.046 06:37:11 -- accel/accel.sh@21 -- # val= 00:06:58.046 06:37:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.046 06:37:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.046 06:37:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.046 06:37:11 -- accel/accel.sh@21 -- # val= 00:06:58.046 06:37:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.046 06:37:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.046 06:37:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.046 06:37:11 -- accel/accel.sh@21 -- # 
val= 00:06:58.046 06:37:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.046 06:37:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.046 06:37:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.046 06:37:11 -- accel/accel.sh@21 -- # val= 00:06:58.046 06:37:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.046 06:37:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.046 06:37:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.046 06:37:11 -- accel/accel.sh@21 -- # val= 00:06:58.046 06:37:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.046 06:37:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.046 06:37:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.046 06:37:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.046 06:37:11 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:58.047 06:37:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.047 00:06:58.047 real 0m3.112s 00:06:58.047 user 0m2.643s 00:06:58.047 sys 0m0.266s 00:06:58.047 06:37:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.047 06:37:11 -- common/autotest_common.sh@10 -- # set +x 00:06:58.047 ************************************ 00:06:58.047 END TEST accel_decmop_full 00:06:58.047 ************************************ 00:06:58.047 06:37:11 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:58.047 06:37:11 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:58.047 06:37:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.047 06:37:11 -- common/autotest_common.sh@10 -- # set +x 00:06:58.047 ************************************ 00:06:58.047 START TEST accel_decomp_mcore 00:06:58.047 ************************************ 00:06:58.047 06:37:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:58.047 06:37:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.047 06:37:11 -- accel/accel.sh@17 -- # local accel_module 00:06:58.047 06:37:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:58.047 06:37:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:58.047 06:37:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.047 06:37:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.047 06:37:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.047 06:37:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.047 06:37:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.047 06:37:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.047 06:37:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.047 06:37:11 -- accel/accel.sh@42 -- # jq -r . 00:06:58.047 [2024-12-14 06:37:11.973340] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
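Worth noting about the accel_decmop_full run above: its command line differs from accel_decomp only by the trailing -o 0, and its report switches from 4096-byte to 111250-byte transfers. My reading is that -o 0 lets the run use the full uncompressed buffer of the bib input, but that is inferred from the report, not from accel_perf documentation. The effect on decompress throughput is easy to check from the two Total rows above:

    # decompress, 4096-byte transfers (accel_decomp):
    echo "$(( 82688 * 4096 / 1048576 )) MiB/s"    # -> 323 MiB/s
    # decompress, full 111250-byte buffers (accel_decmop_full):
    echo "$(( 5664 * 111250 / 1048576 )) MiB/s"   # -> 600 MiB/s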
00:06:58.047 [2024-12-14 06:37:11.973436] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59427 ] 00:06:58.305 [2024-12-14 06:37:12.107809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.305 [2024-12-14 06:37:12.257480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.305 [2024-12-14 06:37:12.257615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.305 [2024-12-14 06:37:12.257749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.305 [2024-12-14 06:37:12.258105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.681 06:37:13 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:59.681 00:06:59.681 SPDK Configuration: 00:06:59.681 Core mask: 0xf 00:06:59.681 00:06:59.681 Accel Perf Configuration: 00:06:59.681 Workload Type: decompress 00:06:59.681 Transfer size: 4096 bytes 00:06:59.681 Vector count 1 00:06:59.681 Module: software 00:06:59.681 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.681 Queue depth: 32 00:06:59.681 Allocate depth: 32 00:06:59.681 # threads/core: 1 00:06:59.681 Run time: 1 seconds 00:06:59.681 Verify: Yes 00:06:59.681 00:06:59.681 Running for 1 seconds... 00:06:59.681 00:06:59.681 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.681 ------------------------------------------------------------------------------------ 00:06:59.681 0,0 65344/s 120 MiB/s 0 0 00:06:59.681 3,0 62656/s 115 MiB/s 0 0 00:06:59.681 2,0 65952/s 121 MiB/s 0 0 00:06:59.681 1,0 61536/s 113 MiB/s 0 0 00:06:59.681 ==================================================================================== 00:06:59.681 Total 255488/s 998 MiB/s 0 0' 00:06:59.681 06:37:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.681 06:37:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:59.681 06:37:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.681 06:37:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.681 06:37:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:59.681 06:37:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.681 06:37:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.681 06:37:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.681 06:37:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.681 06:37:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.681 06:37:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.681 06:37:13 -- accel/accel.sh@42 -- # jq -r . 00:06:59.681 [2024-12-14 06:37:13.620491] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
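accel_decomp_mcore is the first run in this section with a core mask other than 0x1: -m 0xf brings up four reactors (cores 0-3 in the notices above) and the table gains one row per core. A small snippet for decoding such a mask, included as a reading aid rather than anything from the repo:

    mask=0xf
    for ((core = 0; core < 8; core++)); do
        (( (mask >> core) & 1 )) && echo "core $core selected"
    done
    # -> cores 0-3, matching the four "Reactor started on core N" notices
    #    and the 0,0 / 1,0 / 2,0 / 3,0 rows in the table above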
00:06:59.681 [2024-12-14 06:37:13.620587] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59450 ] 00:06:59.940 [2024-12-14 06:37:13.749586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:59.940 [2024-12-14 06:37:13.832795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.940 [2024-12-14 06:37:13.832932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.940 [2024-12-14 06:37:13.833051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.940 [2024-12-14 06:37:13.833422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.940 06:37:13 -- accel/accel.sh@21 -- # val= 00:06:59.940 06:37:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.940 06:37:13 -- accel/accel.sh@21 -- # val= 00:06:59.940 06:37:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.940 06:37:13 -- accel/accel.sh@21 -- # val= 00:06:59.940 06:37:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.940 06:37:13 -- accel/accel.sh@21 -- # val=0xf 00:06:59.940 06:37:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.940 06:37:13 -- accel/accel.sh@21 -- # val= 00:06:59.940 06:37:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.940 06:37:13 -- accel/accel.sh@21 -- # val= 00:06:59.940 06:37:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.940 06:37:13 -- accel/accel.sh@21 -- # val=decompress 00:06:59.940 06:37:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.940 06:37:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.940 06:37:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.940 06:37:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.940 06:37:13 -- accel/accel.sh@21 -- # val= 00:06:59.940 06:37:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.940 06:37:13 -- accel/accel.sh@21 -- # val=software 00:06:59.940 06:37:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.940 06:37:13 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.940 06:37:13 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.940 06:37:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # IFS=: 
00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.940 06:37:13 -- accel/accel.sh@21 -- # val=32 00:06:59.940 06:37:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.940 06:37:13 -- accel/accel.sh@21 -- # val=32 00:06:59.940 06:37:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.940 06:37:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.940 06:37:13 -- accel/accel.sh@21 -- # val=1 00:06:59.941 06:37:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.941 06:37:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.941 06:37:13 -- accel/accel.sh@20 -- # read -r var val 00:07:00.200 06:37:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.200 06:37:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.200 06:37:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.200 06:37:13 -- accel/accel.sh@20 -- # read -r var val 00:07:00.200 06:37:13 -- accel/accel.sh@21 -- # val=Yes 00:07:00.200 06:37:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.200 06:37:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.200 06:37:13 -- accel/accel.sh@20 -- # read -r var val 00:07:00.200 06:37:13 -- accel/accel.sh@21 -- # val= 00:07:00.200 06:37:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.200 06:37:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.200 06:37:13 -- accel/accel.sh@20 -- # read -r var val 00:07:00.200 06:37:13 -- accel/accel.sh@21 -- # val= 00:07:00.200 06:37:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.200 06:37:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.200 06:37:13 -- accel/accel.sh@20 -- # read -r var val 00:07:01.573 06:37:15 -- accel/accel.sh@21 -- # val= 00:07:01.573 06:37:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.573 06:37:15 -- accel/accel.sh@20 -- # IFS=: 00:07:01.573 06:37:15 -- accel/accel.sh@20 -- # read -r var val 00:07:01.573 06:37:15 -- accel/accel.sh@21 -- # val= 00:07:01.573 06:37:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.573 06:37:15 -- accel/accel.sh@20 -- # IFS=: 00:07:01.573 06:37:15 -- accel/accel.sh@20 -- # read -r var val 00:07:01.573 06:37:15 -- accel/accel.sh@21 -- # val= 00:07:01.573 06:37:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.573 06:37:15 -- accel/accel.sh@20 -- # IFS=: 00:07:01.573 06:37:15 -- accel/accel.sh@20 -- # read -r var val 00:07:01.573 06:37:15 -- accel/accel.sh@21 -- # val= 00:07:01.573 06:37:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.573 06:37:15 -- accel/accel.sh@20 -- # IFS=: 00:07:01.573 06:37:15 -- accel/accel.sh@20 -- # read -r var val 00:07:01.573 06:37:15 -- accel/accel.sh@21 -- # val= 00:07:01.573 06:37:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.573 06:37:15 -- accel/accel.sh@20 -- # IFS=: 00:07:01.573 06:37:15 -- accel/accel.sh@20 -- # read -r var val 00:07:01.573 06:37:15 -- accel/accel.sh@21 -- # val= 00:07:01.573 06:37:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.573 06:37:15 -- accel/accel.sh@20 -- # IFS=: 00:07:01.573 06:37:15 -- accel/accel.sh@20 -- # read -r var val 00:07:01.573 06:37:15 -- accel/accel.sh@21 -- # val= 00:07:01.573 06:37:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.573 06:37:15 -- accel/accel.sh@20 -- # IFS=: 00:07:01.573 06:37:15 -- accel/accel.sh@20 -- # read -r var val 00:07:01.573 06:37:15 -- accel/accel.sh@21 -- # val= 00:07:01.573 06:37:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.573 06:37:15 -- accel/accel.sh@20 -- # IFS=: 00:07:01.573 06:37:15 -- 
accel/accel.sh@20 -- # read -r var val 00:07:01.573 06:37:15 -- accel/accel.sh@21 -- # val= 00:07:01.573 06:37:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.573 06:37:15 -- accel/accel.sh@20 -- # IFS=: 00:07:01.573 06:37:15 -- accel/accel.sh@20 -- # read -r var val 00:07:01.573 06:37:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.573 ************************************ 00:07:01.573 END TEST accel_decomp_mcore 00:07:01.573 ************************************ 00:07:01.573 06:37:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:01.573 06:37:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.573 00:07:01.573 real 0m3.220s 00:07:01.573 user 0m9.889s 00:07:01.573 sys 0m0.325s 00:07:01.573 06:37:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.573 06:37:15 -- common/autotest_common.sh@10 -- # set +x 00:07:01.573 06:37:15 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.573 06:37:15 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:01.573 06:37:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.573 06:37:15 -- common/autotest_common.sh@10 -- # set +x 00:07:01.573 ************************************ 00:07:01.573 START TEST accel_decomp_full_mcore 00:07:01.573 ************************************ 00:07:01.573 06:37:15 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.573 06:37:15 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.573 06:37:15 -- accel/accel.sh@17 -- # local accel_module 00:07:01.573 06:37:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.573 06:37:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.573 06:37:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.573 06:37:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.573 06:37:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.573 06:37:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.573 06:37:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.573 06:37:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.573 06:37:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.573 06:37:15 -- accel/accel.sh@42 -- # jq -r . 00:07:01.573 [2024-12-14 06:37:15.249161] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.573 [2024-12-14 06:37:15.249258] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59492 ] 00:07:01.573 [2024-12-14 06:37:15.390209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.573 [2024-12-14 06:37:15.499113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.573 [2024-12-14 06:37:15.499255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.573 [2024-12-14 06:37:15.499365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.573 [2024-12-14 06:37:15.499370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.949 06:37:16 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:02.949 00:07:02.949 SPDK Configuration: 00:07:02.949 Core mask: 0xf 00:07:02.949 00:07:02.949 Accel Perf Configuration: 00:07:02.949 Workload Type: decompress 00:07:02.949 Transfer size: 111250 bytes 00:07:02.949 Vector count 1 00:07:02.949 Module: software 00:07:02.949 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:02.949 Queue depth: 32 00:07:02.949 Allocate depth: 32 00:07:02.949 # threads/core: 1 00:07:02.949 Run time: 1 seconds 00:07:02.949 Verify: Yes 00:07:02.949 00:07:02.949 Running for 1 seconds... 00:07:02.949 00:07:02.949 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.949 ------------------------------------------------------------------------------------ 00:07:02.949 0,0 5056/s 208 MiB/s 0 0 00:07:02.949 3,0 4992/s 206 MiB/s 0 0 00:07:02.949 2,0 5024/s 207 MiB/s 0 0 00:07:02.949 1,0 4992/s 206 MiB/s 0 0 00:07:02.949 ==================================================================================== 00:07:02.949 Total 20064/s 2128 MiB/s 0 0' 00:07:02.949 06:37:16 -- accel/accel.sh@20 -- # IFS=: 00:07:02.949 06:37:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:02.949 06:37:16 -- accel/accel.sh@20 -- # read -r var val 00:07:02.949 06:37:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:02.949 06:37:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.949 06:37:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.949 06:37:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.949 06:37:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.949 06:37:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.949 06:37:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.949 06:37:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.949 06:37:16 -- accel/accel.sh@42 -- # jq -r . 00:07:02.949 [2024-12-14 06:37:16.859865] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
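The full-buffer variant above differs from the plain mcore case only in the -o 0 flag: with it, the configuration block reports 111250-byte transfers instead of the 4096-byte default, so the trace suggests -o 0 lets accel_perf size each transfer from the decompressed input rather than using a fixed chunk. A hedged sketch of the same invocation (SPDK_REPO is again just shorthand for the checkout used on the test VM):

# Sketch: the "full" multi-core decompress case, as traced above.
# -o 0 is taken from the harness command line; the 111250-byte transfer size
# seen in the configuration block is an observed effect, not a documented contract.
SPDK_REPO=/home/vagrant/spdk_repo/spdk
"$SPDK_REPO/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_REPO/test/accel/bib" -y -o 0 -m 0xf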
00:07:02.949 [2024-12-14 06:37:16.860254] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59510 ] 00:07:03.208 [2024-12-14 06:37:16.995534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:03.208 [2024-12-14 06:37:17.103106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.208 [2024-12-14 06:37:17.103177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.208 [2024-12-14 06:37:17.103313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.208 [2024-12-14 06:37:17.103315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.208 06:37:17 -- accel/accel.sh@21 -- # val= 00:07:03.208 06:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.208 06:37:17 -- accel/accel.sh@21 -- # val= 00:07:03.208 06:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.208 06:37:17 -- accel/accel.sh@21 -- # val= 00:07:03.208 06:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.208 06:37:17 -- accel/accel.sh@21 -- # val=0xf 00:07:03.208 06:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.208 06:37:17 -- accel/accel.sh@21 -- # val= 00:07:03.208 06:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.208 06:37:17 -- accel/accel.sh@21 -- # val= 00:07:03.208 06:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.208 06:37:17 -- accel/accel.sh@21 -- # val=decompress 00:07:03.208 06:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.208 06:37:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.208 06:37:17 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:03.208 06:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.208 06:37:17 -- accel/accel.sh@21 -- # val= 00:07:03.208 06:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.208 06:37:17 -- accel/accel.sh@21 -- # val=software 00:07:03.208 06:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.208 06:37:17 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.208 06:37:17 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:03.208 06:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # IFS=: 
00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.208 06:37:17 -- accel/accel.sh@21 -- # val=32 00:07:03.208 06:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.208 06:37:17 -- accel/accel.sh@21 -- # val=32 00:07:03.208 06:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.208 06:37:17 -- accel/accel.sh@21 -- # val=1 00:07:03.208 06:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.208 06:37:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.208 06:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.208 06:37:17 -- accel/accel.sh@21 -- # val=Yes 00:07:03.208 06:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.208 06:37:17 -- accel/accel.sh@21 -- # val= 00:07:03.208 06:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.208 06:37:17 -- accel/accel.sh@21 -- # val= 00:07:03.208 06:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.208 06:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.602 06:37:18 -- accel/accel.sh@21 -- # val= 00:07:04.602 06:37:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.602 06:37:18 -- accel/accel.sh@20 -- # IFS=: 00:07:04.602 06:37:18 -- accel/accel.sh@20 -- # read -r var val 00:07:04.602 06:37:18 -- accel/accel.sh@21 -- # val= 00:07:04.602 06:37:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.602 06:37:18 -- accel/accel.sh@20 -- # IFS=: 00:07:04.602 06:37:18 -- accel/accel.sh@20 -- # read -r var val 00:07:04.602 06:37:18 -- accel/accel.sh@21 -- # val= 00:07:04.602 06:37:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.602 06:37:18 -- accel/accel.sh@20 -- # IFS=: 00:07:04.602 06:37:18 -- accel/accel.sh@20 -- # read -r var val 00:07:04.602 06:37:18 -- accel/accel.sh@21 -- # val= 00:07:04.602 06:37:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.602 06:37:18 -- accel/accel.sh@20 -- # IFS=: 00:07:04.602 06:37:18 -- accel/accel.sh@20 -- # read -r var val 00:07:04.602 06:37:18 -- accel/accel.sh@21 -- # val= 00:07:04.602 06:37:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.602 06:37:18 -- accel/accel.sh@20 -- # IFS=: 00:07:04.602 06:37:18 -- accel/accel.sh@20 -- # read -r var val 00:07:04.602 06:37:18 -- accel/accel.sh@21 -- # val= 00:07:04.603 06:37:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.603 06:37:18 -- accel/accel.sh@20 -- # IFS=: 00:07:04.603 06:37:18 -- accel/accel.sh@20 -- # read -r var val 00:07:04.603 06:37:18 -- accel/accel.sh@21 -- # val= 00:07:04.603 06:37:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.603 06:37:18 -- accel/accel.sh@20 -- # IFS=: 00:07:04.603 06:37:18 -- accel/accel.sh@20 -- # read -r var val 00:07:04.603 06:37:18 -- accel/accel.sh@21 -- # val= 00:07:04.603 06:37:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.603 06:37:18 -- accel/accel.sh@20 -- # IFS=: 00:07:04.603 06:37:18 -- 
accel/accel.sh@20 -- # read -r var val 00:07:04.603 06:37:18 -- accel/accel.sh@21 -- # val= 00:07:04.603 06:37:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.603 06:37:18 -- accel/accel.sh@20 -- # IFS=: 00:07:04.603 ************************************ 00:07:04.603 END TEST accel_decomp_full_mcore 00:07:04.603 ************************************ 00:07:04.603 06:37:18 -- accel/accel.sh@20 -- # read -r var val 00:07:04.603 06:37:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.603 06:37:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:04.603 06:37:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.603 00:07:04.603 real 0m3.225s 00:07:04.603 user 0m9.868s 00:07:04.603 sys 0m0.346s 00:07:04.603 06:37:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.603 06:37:18 -- common/autotest_common.sh@10 -- # set +x 00:07:04.603 06:37:18 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:04.603 06:37:18 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:04.603 06:37:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.603 06:37:18 -- common/autotest_common.sh@10 -- # set +x 00:07:04.603 ************************************ 00:07:04.603 START TEST accel_decomp_mthread 00:07:04.603 ************************************ 00:07:04.603 06:37:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:04.603 06:37:18 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.603 06:37:18 -- accel/accel.sh@17 -- # local accel_module 00:07:04.603 06:37:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:04.603 06:37:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:04.603 06:37:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.603 06:37:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.603 06:37:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.603 06:37:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.603 06:37:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.603 06:37:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.603 06:37:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.603 06:37:18 -- accel/accel.sh@42 -- # jq -r . 00:07:04.603 [2024-12-14 06:37:18.524808] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:04.603 [2024-12-14 06:37:18.525063] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59553 ] 00:07:04.862 [2024-12-14 06:37:18.652828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.862 [2024-12-14 06:37:18.738623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.238 06:37:20 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:06.238 00:07:06.238 SPDK Configuration: 00:07:06.238 Core mask: 0x1 00:07:06.238 00:07:06.238 Accel Perf Configuration: 00:07:06.238 Workload Type: decompress 00:07:06.238 Transfer size: 4096 bytes 00:07:06.238 Vector count 1 00:07:06.238 Module: software 00:07:06.238 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:06.238 Queue depth: 32 00:07:06.238 Allocate depth: 32 00:07:06.238 # threads/core: 2 00:07:06.238 Run time: 1 seconds 00:07:06.238 Verify: Yes 00:07:06.238 00:07:06.238 Running for 1 seconds... 00:07:06.238 00:07:06.238 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.238 ------------------------------------------------------------------------------------ 00:07:06.238 0,1 42784/s 78 MiB/s 0 0 00:07:06.238 0,0 42656/s 78 MiB/s 0 0 00:07:06.238 ==================================================================================== 00:07:06.238 Total 85440/s 333 MiB/s 0 0' 00:07:06.238 06:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.238 06:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.238 06:37:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:06.238 06:37:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:06.238 06:37:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.238 06:37:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.238 06:37:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.238 06:37:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.238 06:37:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.238 06:37:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.238 06:37:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.238 06:37:20 -- accel/accel.sh@42 -- # jq -r . 00:07:06.238 [2024-12-14 06:37:20.076115] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
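The mthread pass above stays on a single core (mask 0x1 in the configuration block; no -m appears on the harness command line) but asks for two worker threads per core with -T 2, which is why the results table has a 0,0 and a 0,1 row; the Total row again works out to transfers/s times transfer size (85440/s x 4096 B is about 333 MiB/s). The full_mthread variant that follows simply combines -T 2 with -o 0. A minimal sketch of the two-thread run, with SPDK_REPO as an assumed shorthand:

# Sketch: single reactor, two accel_perf worker threads per core.
#   -T 2  threads per core (rows 0,0 and 0,1 in the table above)
SPDK_REPO=/home/vagrant/spdk_repo/spdk
"$SPDK_REPO/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_REPO/test/accel/bib" -y -T 2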
00:07:06.238 [2024-12-14 06:37:20.076210] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59567 ] 00:07:06.238 [2024-12-14 06:37:20.206041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.497 [2024-12-14 06:37:20.296064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.497 06:37:20 -- accel/accel.sh@21 -- # val= 00:07:06.497 06:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.497 06:37:20 -- accel/accel.sh@21 -- # val= 00:07:06.497 06:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.497 06:37:20 -- accel/accel.sh@21 -- # val= 00:07:06.497 06:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.497 06:37:20 -- accel/accel.sh@21 -- # val=0x1 00:07:06.497 06:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.497 06:37:20 -- accel/accel.sh@21 -- # val= 00:07:06.497 06:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.497 06:37:20 -- accel/accel.sh@21 -- # val= 00:07:06.497 06:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.497 06:37:20 -- accel/accel.sh@21 -- # val=decompress 00:07:06.497 06:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.497 06:37:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.497 06:37:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.497 06:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.497 06:37:20 -- accel/accel.sh@21 -- # val= 00:07:06.497 06:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.497 06:37:20 -- accel/accel.sh@21 -- # val=software 00:07:06.497 06:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.497 06:37:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.497 06:37:20 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:06.497 06:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.497 06:37:20 -- accel/accel.sh@21 -- # val=32 00:07:06.497 06:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.497 06:37:20 -- 
accel/accel.sh@21 -- # val=32 00:07:06.497 06:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.497 06:37:20 -- accel/accel.sh@21 -- # val=2 00:07:06.497 06:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.497 06:37:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.497 06:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.497 06:37:20 -- accel/accel.sh@21 -- # val=Yes 00:07:06.497 06:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.497 06:37:20 -- accel/accel.sh@21 -- # val= 00:07:06.497 06:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.497 06:37:20 -- accel/accel.sh@21 -- # val= 00:07:06.497 06:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.497 06:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.874 06:37:21 -- accel/accel.sh@21 -- # val= 00:07:07.874 06:37:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.874 06:37:21 -- accel/accel.sh@20 -- # IFS=: 00:07:07.874 06:37:21 -- accel/accel.sh@20 -- # read -r var val 00:07:07.874 06:37:21 -- accel/accel.sh@21 -- # val= 00:07:07.874 06:37:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.874 06:37:21 -- accel/accel.sh@20 -- # IFS=: 00:07:07.874 06:37:21 -- accel/accel.sh@20 -- # read -r var val 00:07:07.874 06:37:21 -- accel/accel.sh@21 -- # val= 00:07:07.874 06:37:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.874 06:37:21 -- accel/accel.sh@20 -- # IFS=: 00:07:07.874 06:37:21 -- accel/accel.sh@20 -- # read -r var val 00:07:07.874 06:37:21 -- accel/accel.sh@21 -- # val= 00:07:07.874 06:37:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.874 06:37:21 -- accel/accel.sh@20 -- # IFS=: 00:07:07.874 06:37:21 -- accel/accel.sh@20 -- # read -r var val 00:07:07.874 06:37:21 -- accel/accel.sh@21 -- # val= 00:07:07.874 06:37:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.874 06:37:21 -- accel/accel.sh@20 -- # IFS=: 00:07:07.874 06:37:21 -- accel/accel.sh@20 -- # read -r var val 00:07:07.874 06:37:21 -- accel/accel.sh@21 -- # val= 00:07:07.874 06:37:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.874 06:37:21 -- accel/accel.sh@20 -- # IFS=: 00:07:07.874 06:37:21 -- accel/accel.sh@20 -- # read -r var val 00:07:07.874 06:37:21 -- accel/accel.sh@21 -- # val= 00:07:07.874 06:37:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.874 06:37:21 -- accel/accel.sh@20 -- # IFS=: 00:07:07.874 06:37:21 -- accel/accel.sh@20 -- # read -r var val 00:07:07.874 06:37:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.874 06:37:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:07.874 06:37:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.874 00:07:07.874 real 0m3.102s 00:07:07.874 user 0m2.626s 00:07:07.874 sys 0m0.272s 00:07:07.874 ************************************ 00:07:07.874 END TEST accel_decomp_mthread 00:07:07.874 ************************************ 00:07:07.874 06:37:21 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:07:07.874 06:37:21 -- common/autotest_common.sh@10 -- # set +x 00:07:07.874 06:37:21 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:07.874 06:37:21 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:07.874 06:37:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.874 06:37:21 -- common/autotest_common.sh@10 -- # set +x 00:07:07.874 ************************************ 00:07:07.874 START TEST accel_deomp_full_mthread 00:07:07.874 ************************************ 00:07:07.874 06:37:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:07.874 06:37:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.874 06:37:21 -- accel/accel.sh@17 -- # local accel_module 00:07:07.874 06:37:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:07.874 06:37:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:07.874 06:37:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.874 06:37:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.874 06:37:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.874 06:37:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.874 06:37:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.874 06:37:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.874 06:37:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.874 06:37:21 -- accel/accel.sh@42 -- # jq -r . 00:07:07.874 [2024-12-14 06:37:21.682829] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:07.874 [2024-12-14 06:37:21.683560] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59607 ] 00:07:07.874 [2024-12-14 06:37:21.822486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.133 [2024-12-14 06:37:21.923471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.508 06:37:23 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:09.508 00:07:09.508 SPDK Configuration: 00:07:09.508 Core mask: 0x1 00:07:09.508 00:07:09.508 Accel Perf Configuration: 00:07:09.508 Workload Type: decompress 00:07:09.508 Transfer size: 111250 bytes 00:07:09.508 Vector count 1 00:07:09.508 Module: software 00:07:09.508 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.508 Queue depth: 32 00:07:09.508 Allocate depth: 32 00:07:09.508 # threads/core: 2 00:07:09.508 Run time: 1 seconds 00:07:09.508 Verify: Yes 00:07:09.508 00:07:09.508 Running for 1 seconds... 
00:07:09.508 00:07:09.508 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.508 ------------------------------------------------------------------------------------ 00:07:09.508 0,1 2848/s 117 MiB/s 0 0 00:07:09.508 0,0 2848/s 117 MiB/s 0 0 00:07:09.508 ==================================================================================== 00:07:09.508 Total 5696/s 604 MiB/s 0 0' 00:07:09.508 06:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.508 06:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.508 06:37:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:09.508 06:37:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:09.508 06:37:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.508 06:37:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.508 06:37:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.508 06:37:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.508 06:37:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.508 06:37:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.508 06:37:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.508 06:37:23 -- accel/accel.sh@42 -- # jq -r . 00:07:09.508 [2024-12-14 06:37:23.265057] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:09.508 [2024-12-14 06:37:23.265152] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59621 ] 00:07:09.508 [2024-12-14 06:37:23.397157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.508 [2024-12-14 06:37:23.493731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.766 06:37:23 -- accel/accel.sh@21 -- # val= 00:07:09.766 06:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.766 06:37:23 -- accel/accel.sh@21 -- # val= 00:07:09.766 06:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.766 06:37:23 -- accel/accel.sh@21 -- # val= 00:07:09.766 06:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.766 06:37:23 -- accel/accel.sh@21 -- # val=0x1 00:07:09.766 06:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.766 06:37:23 -- accel/accel.sh@21 -- # val= 00:07:09.766 06:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.766 06:37:23 -- accel/accel.sh@21 -- # val= 00:07:09.766 06:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.766 06:37:23 -- accel/accel.sh@21 -- # val=decompress 00:07:09.766 06:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.766 06:37:23 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.766 06:37:23 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:09.766 06:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.766 06:37:23 -- accel/accel.sh@21 -- # val= 00:07:09.766 06:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.766 06:37:23 -- accel/accel.sh@21 -- # val=software 00:07:09.766 06:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.766 06:37:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.766 06:37:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.766 06:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.766 06:37:23 -- accel/accel.sh@21 -- # val=32 00:07:09.766 06:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.766 06:37:23 -- accel/accel.sh@21 -- # val=32 00:07:09.766 06:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.766 06:37:23 -- accel/accel.sh@21 -- # val=2 00:07:09.766 06:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.766 06:37:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:09.766 06:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.766 06:37:23 -- accel/accel.sh@21 -- # val=Yes 00:07:09.766 06:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.766 06:37:23 -- accel/accel.sh@21 -- # val= 00:07:09.766 06:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.766 06:37:23 -- accel/accel.sh@21 -- # val= 00:07:09.766 06:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.766 06:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:11.142 06:37:24 -- accel/accel.sh@21 -- # val= 00:07:11.142 06:37:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.142 06:37:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.142 06:37:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.142 06:37:24 -- accel/accel.sh@21 -- # val= 00:07:11.142 06:37:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.142 06:37:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.142 06:37:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.142 06:37:24 -- accel/accel.sh@21 -- # val= 00:07:11.142 06:37:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.142 06:37:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.142 06:37:24 -- accel/accel.sh@20 -- # 
read -r var val 00:07:11.142 06:37:24 -- accel/accel.sh@21 -- # val= 00:07:11.142 06:37:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.142 06:37:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.142 06:37:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.142 06:37:24 -- accel/accel.sh@21 -- # val= 00:07:11.142 06:37:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.142 06:37:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.142 06:37:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.142 06:37:24 -- accel/accel.sh@21 -- # val= 00:07:11.142 06:37:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.142 06:37:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.142 06:37:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.142 06:37:24 -- accel/accel.sh@21 -- # val= 00:07:11.142 06:37:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.142 06:37:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.142 06:37:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.142 06:37:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.142 06:37:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:11.142 06:37:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.142 00:07:11.142 real 0m3.168s 00:07:11.142 user 0m2.693s 00:07:11.142 sys 0m0.271s 00:07:11.142 06:37:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.142 06:37:24 -- common/autotest_common.sh@10 -- # set +x 00:07:11.142 ************************************ 00:07:11.142 END TEST accel_deomp_full_mthread 00:07:11.142 ************************************ 00:07:11.142 06:37:24 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:11.142 06:37:24 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:11.142 06:37:24 -- accel/accel.sh@129 -- # build_accel_config 00:07:11.142 06:37:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:11.142 06:37:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.142 06:37:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.142 06:37:24 -- common/autotest_common.sh@10 -- # set +x 00:07:11.142 06:37:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.142 06:37:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.142 06:37:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.142 06:37:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.142 06:37:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.142 06:37:24 -- accel/accel.sh@42 -- # jq -r . 00:07:11.142 ************************************ 00:07:11.142 START TEST accel_dif_functional_tests 00:07:11.142 ************************************ 00:07:11.142 06:37:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:11.142 [2024-12-14 06:37:24.936644] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
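The dif binary at test/accel/dif/dif is a CUnit suite, and its output follows below; the *ERROR* lines printed by dif.c during the "DIF not generated" and "incorrect" cases are expected negative-path output, since those cases feed deliberately mismatched Guard/App/Ref tags and pass precisely because the verify path reports the mismatch. A small sketch of checking the run summary from a saved copy of the suite's own stdout (dif.log is a hypothetical capture without the harness's timestamp prefixes):

# Sketch: confirm the CUnit summary reported zero failed tests.
# Summary columns are: Type Total Ran Passed Failed Inactive.
awk '$1 == "tests" { failed = $5 } END { exit failed != 0 }' dif.log \
    && echo "accel_dif: no failures"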
00:07:11.142 [2024-12-14 06:37:24.936906] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59662 ] 00:07:11.143 [2024-12-14 06:37:25.075458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.401 [2024-12-14 06:37:25.164089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.401 [2024-12-14 06:37:25.164225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.401 [2024-12-14 06:37:25.164229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.401 00:07:11.401 00:07:11.401 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.401 http://cunit.sourceforge.net/ 00:07:11.401 00:07:11.401 00:07:11.401 Suite: accel_dif 00:07:11.401 Test: verify: DIF generated, GUARD check ...passed 00:07:11.401 Test: verify: DIF generated, APPTAG check ...passed 00:07:11.401 Test: verify: DIF generated, REFTAG check ...passed 00:07:11.401 Test: verify: DIF not generated, GUARD check ...[2024-12-14 06:37:25.284455] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:11.401 [2024-12-14 06:37:25.284690] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:11.401 passed 00:07:11.401 Test: verify: DIF not generated, APPTAG check ...[2024-12-14 06:37:25.284855] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:11.401 [2024-12-14 06:37:25.285095] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:11.401 passed 00:07:11.401 Test: verify: DIF not generated, REFTAG check ...[2024-12-14 06:37:25.285285] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:11.401 [2024-12-14 06:37:25.285375] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:11.401 passed 00:07:11.401 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:11.401 Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-14 06:37:25.285749] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:11.401 passed 00:07:11.401 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:11.401 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:11.401 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:11.401 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-14 06:37:25.286440] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:11.401 passed 00:07:11.401 Test: generate copy: DIF generated, GUARD check ...passed 00:07:11.401 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:11.401 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:11.401 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:11.401 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:11.401 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:11.401 Test: generate copy: iovecs-len validate ...[2024-12-14 06:37:25.287654] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned passed 00:07:11.401 Test: generate copy: buffer alignment validate ...passed 00:07:11.401 00:07:11.401 Run 
Summary: Type Total Ran Passed Failed Inactive 00:07:11.401 suites 1 1 n/a 0 0 00:07:11.401 tests 20 20 20 0 0 00:07:11.401 asserts 204 204 204 0 n/a 00:07:11.401 00:07:11.401 Elapsed time = 0.009 seconds 00:07:11.401 with block_size. 00:07:11.659 00:07:11.659 real 0m0.707s 00:07:11.659 user 0m1.019s 00:07:11.659 sys 0m0.186s 00:07:11.659 06:37:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.659 ************************************ 00:07:11.659 END TEST accel_dif_functional_tests 00:07:11.659 ************************************ 00:07:11.659 06:37:25 -- common/autotest_common.sh@10 -- # set +x 00:07:11.659 ************************************ 00:07:11.659 END TEST accel 00:07:11.659 ************************************ 00:07:11.659 00:07:11.659 real 1m8.776s 00:07:11.659 user 1m12.505s 00:07:11.659 sys 0m7.635s 00:07:11.659 06:37:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.659 06:37:25 -- common/autotest_common.sh@10 -- # set +x 00:07:11.918 06:37:25 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:11.918 06:37:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:11.918 06:37:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.918 06:37:25 -- common/autotest_common.sh@10 -- # set +x 00:07:11.918 ************************************ 00:07:11.918 START TEST accel_rpc 00:07:11.918 ************************************ 00:07:11.918 06:37:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:11.918 * Looking for test storage... 00:07:11.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:11.918 06:37:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:11.918 06:37:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:11.918 06:37:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:11.918 06:37:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:11.918 06:37:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:11.918 06:37:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:11.919 06:37:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:11.919 06:37:25 -- scripts/common.sh@335 -- # IFS=.-: 00:07:11.919 06:37:25 -- scripts/common.sh@335 -- # read -ra ver1 00:07:11.919 06:37:25 -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.919 06:37:25 -- scripts/common.sh@336 -- # read -ra ver2 00:07:11.919 06:37:25 -- scripts/common.sh@337 -- # local 'op=<' 00:07:11.919 06:37:25 -- scripts/common.sh@339 -- # ver1_l=2 00:07:11.919 06:37:25 -- scripts/common.sh@340 -- # ver2_l=1 00:07:11.919 06:37:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:11.919 06:37:25 -- scripts/common.sh@343 -- # case "$op" in 00:07:11.919 06:37:25 -- scripts/common.sh@344 -- # : 1 00:07:11.919 06:37:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:11.919 06:37:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.919 06:37:25 -- scripts/common.sh@364 -- # decimal 1 00:07:11.919 06:37:25 -- scripts/common.sh@352 -- # local d=1 00:07:11.919 06:37:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.919 06:37:25 -- scripts/common.sh@354 -- # echo 1 00:07:11.919 06:37:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:11.919 06:37:25 -- scripts/common.sh@365 -- # decimal 2 00:07:11.919 06:37:25 -- scripts/common.sh@352 -- # local d=2 00:07:11.919 06:37:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.919 06:37:25 -- scripts/common.sh@354 -- # echo 2 00:07:11.919 06:37:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:11.919 06:37:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:11.919 06:37:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:11.919 06:37:25 -- scripts/common.sh@367 -- # return 0 00:07:11.919 06:37:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.919 06:37:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:11.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.919 --rc genhtml_branch_coverage=1 00:07:11.919 --rc genhtml_function_coverage=1 00:07:11.919 --rc genhtml_legend=1 00:07:11.919 --rc geninfo_all_blocks=1 00:07:11.919 --rc geninfo_unexecuted_blocks=1 00:07:11.919 00:07:11.919 ' 00:07:11.919 06:37:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:11.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.919 --rc genhtml_branch_coverage=1 00:07:11.919 --rc genhtml_function_coverage=1 00:07:11.919 --rc genhtml_legend=1 00:07:11.919 --rc geninfo_all_blocks=1 00:07:11.919 --rc geninfo_unexecuted_blocks=1 00:07:11.919 00:07:11.919 ' 00:07:11.919 06:37:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:11.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.919 --rc genhtml_branch_coverage=1 00:07:11.919 --rc genhtml_function_coverage=1 00:07:11.919 --rc genhtml_legend=1 00:07:11.919 --rc geninfo_all_blocks=1 00:07:11.919 --rc geninfo_unexecuted_blocks=1 00:07:11.919 00:07:11.919 ' 00:07:11.919 06:37:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:11.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.919 --rc genhtml_branch_coverage=1 00:07:11.919 --rc genhtml_function_coverage=1 00:07:11.919 --rc genhtml_legend=1 00:07:11.919 --rc geninfo_all_blocks=1 00:07:11.919 --rc geninfo_unexecuted_blocks=1 00:07:11.919 00:07:11.919 ' 00:07:11.919 06:37:25 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:11.919 06:37:25 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59739 00:07:11.919 06:37:25 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:11.919 06:37:25 -- accel/accel_rpc.sh@15 -- # waitforlisten 59739 00:07:11.919 06:37:25 -- common/autotest_common.sh@829 -- # '[' -z 59739 ']' 00:07:11.919 06:37:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.919 06:37:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.919 06:37:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:11.919 06:37:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.919 06:37:25 -- common/autotest_common.sh@10 -- # set +x 00:07:11.919 [2024-12-14 06:37:25.897577] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:11.919 [2024-12-14 06:37:25.897862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59739 ] 00:07:12.178 [2024-12-14 06:37:26.028167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.178 [2024-12-14 06:37:26.143343] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:12.178 [2024-12-14 06:37:26.143783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.114 06:37:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.114 06:37:26 -- common/autotest_common.sh@862 -- # return 0 00:07:13.114 06:37:26 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:13.114 06:37:26 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:13.114 06:37:26 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:13.114 06:37:26 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:13.114 06:37:26 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:13.114 06:37:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.114 06:37:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.114 06:37:26 -- common/autotest_common.sh@10 -- # set +x 00:07:13.114 ************************************ 00:07:13.114 START TEST accel_assign_opcode 00:07:13.114 ************************************ 00:07:13.114 06:37:26 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:13.114 06:37:26 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:13.114 06:37:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.114 06:37:26 -- common/autotest_common.sh@10 -- # set +x 00:07:13.114 [2024-12-14 06:37:26.944490] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:13.114 06:37:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.114 06:37:26 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:13.114 06:37:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.114 06:37:26 -- common/autotest_common.sh@10 -- # set +x 00:07:13.114 [2024-12-14 06:37:26.952481] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:13.114 06:37:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.114 06:37:26 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:13.114 06:37:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.114 06:37:26 -- common/autotest_common.sh@10 -- # set +x 00:07:13.373 06:37:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.373 06:37:27 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:13.373 06:37:27 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:13.373 06:37:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.373 06:37:27 -- accel/accel_rpc.sh@42 -- # grep software 00:07:13.373 06:37:27 -- common/autotest_common.sh@10 -- # set +x 00:07:13.373 06:37:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.373 software 00:07:13.373 
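The assign_opcode test above runs against a bare spdk_tgt started with --wait-for-rpc: the copy opcode is first assigned to a non-existent module name ("incorrect"), then reassigned to the software module, and framework_start_init followed by accel_get_opc_assignments confirms that copy now maps to software. The rpc_cmd helper in the trace ultimately drives the same RPCs that scripts/rpc.py exposes, so the flow can be sketched by hand (the sleep is a crude stand-in for the harness's waitforlisten helper):

# Sketch: assign the copy opcode to the software module over JSON-RPC.
SPDK_REPO=/home/vagrant/spdk_repo/spdk
"$SPDK_REPO/build/bin/spdk_tgt" --wait-for-rpc &
tgt_pid=$!
sleep 2   # crude stand-in for waitforlisten

"$SPDK_REPO/scripts/rpc.py" accel_assign_opc -o copy -m software
"$SPDK_REPO/scripts/rpc.py" framework_start_init
"$SPDK_REPO/scripts/rpc.py" accel_get_opc_assignments   # expect copy -> software

kill "$tgt_pid"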
************************************ 00:07:13.373 END TEST accel_assign_opcode 00:07:13.373 ************************************ 00:07:13.373 00:07:13.373 real 0m0.365s 00:07:13.373 user 0m0.054s 00:07:13.373 sys 0m0.014s 00:07:13.373 06:37:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.373 06:37:27 -- common/autotest_common.sh@10 -- # set +x 00:07:13.373 06:37:27 -- accel/accel_rpc.sh@55 -- # killprocess 59739 00:07:13.373 06:37:27 -- common/autotest_common.sh@936 -- # '[' -z 59739 ']' 00:07:13.373 06:37:27 -- common/autotest_common.sh@940 -- # kill -0 59739 00:07:13.373 06:37:27 -- common/autotest_common.sh@941 -- # uname 00:07:13.373 06:37:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:13.373 06:37:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59739 00:07:13.631 killing process with pid 59739 00:07:13.631 06:37:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:13.631 06:37:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:13.631 06:37:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59739' 00:07:13.631 06:37:27 -- common/autotest_common.sh@955 -- # kill 59739 00:07:13.631 06:37:27 -- common/autotest_common.sh@960 -- # wait 59739 00:07:14.199 ************************************ 00:07:14.199 END TEST accel_rpc 00:07:14.199 ************************************ 00:07:14.199 00:07:14.199 real 0m2.237s 00:07:14.199 user 0m2.265s 00:07:14.199 sys 0m0.566s 00:07:14.199 06:37:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.199 06:37:27 -- common/autotest_common.sh@10 -- # set +x 00:07:14.199 06:37:27 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:14.199 06:37:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:14.199 06:37:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.199 06:37:27 -- common/autotest_common.sh@10 -- # set +x 00:07:14.199 ************************************ 00:07:14.199 START TEST app_cmdline 00:07:14.199 ************************************ 00:07:14.199 06:37:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:14.199 * Looking for test storage... 
00:07:14.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:14.199 06:37:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:14.199 06:37:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:14.199 06:37:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:14.199 06:37:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:14.199 06:37:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:14.199 06:37:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:14.199 06:37:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:14.199 06:37:28 -- scripts/common.sh@335 -- # IFS=.-: 00:07:14.199 06:37:28 -- scripts/common.sh@335 -- # read -ra ver1 00:07:14.199 06:37:28 -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.199 06:37:28 -- scripts/common.sh@336 -- # read -ra ver2 00:07:14.199 06:37:28 -- scripts/common.sh@337 -- # local 'op=<' 00:07:14.199 06:37:28 -- scripts/common.sh@339 -- # ver1_l=2 00:07:14.199 06:37:28 -- scripts/common.sh@340 -- # ver2_l=1 00:07:14.199 06:37:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:14.199 06:37:28 -- scripts/common.sh@343 -- # case "$op" in 00:07:14.199 06:37:28 -- scripts/common.sh@344 -- # : 1 00:07:14.199 06:37:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:14.199 06:37:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:14.199 06:37:28 -- scripts/common.sh@364 -- # decimal 1 00:07:14.199 06:37:28 -- scripts/common.sh@352 -- # local d=1 00:07:14.199 06:37:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.199 06:37:28 -- scripts/common.sh@354 -- # echo 1 00:07:14.199 06:37:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:14.199 06:37:28 -- scripts/common.sh@365 -- # decimal 2 00:07:14.199 06:37:28 -- scripts/common.sh@352 -- # local d=2 00:07:14.199 06:37:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.199 06:37:28 -- scripts/common.sh@354 -- # echo 2 00:07:14.199 06:37:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:14.199 06:37:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:14.199 06:37:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:14.199 06:37:28 -- scripts/common.sh@367 -- # return 0 00:07:14.199 06:37:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.199 06:37:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:14.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.199 --rc genhtml_branch_coverage=1 00:07:14.199 --rc genhtml_function_coverage=1 00:07:14.199 --rc genhtml_legend=1 00:07:14.199 --rc geninfo_all_blocks=1 00:07:14.199 --rc geninfo_unexecuted_blocks=1 00:07:14.199 00:07:14.199 ' 00:07:14.199 06:37:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:14.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.199 --rc genhtml_branch_coverage=1 00:07:14.199 --rc genhtml_function_coverage=1 00:07:14.199 --rc genhtml_legend=1 00:07:14.199 --rc geninfo_all_blocks=1 00:07:14.199 --rc geninfo_unexecuted_blocks=1 00:07:14.199 00:07:14.199 ' 00:07:14.199 06:37:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:14.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.199 --rc genhtml_branch_coverage=1 00:07:14.199 --rc genhtml_function_coverage=1 00:07:14.199 --rc genhtml_legend=1 00:07:14.199 --rc geninfo_all_blocks=1 00:07:14.199 --rc geninfo_unexecuted_blocks=1 00:07:14.199 00:07:14.199 ' 00:07:14.199 06:37:28 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:14.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.199 --rc genhtml_branch_coverage=1 00:07:14.199 --rc genhtml_function_coverage=1 00:07:14.199 --rc genhtml_legend=1 00:07:14.199 --rc geninfo_all_blocks=1 00:07:14.199 --rc geninfo_unexecuted_blocks=1 00:07:14.199 00:07:14.199 ' 00:07:14.199 06:37:28 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:14.199 06:37:28 -- app/cmdline.sh@17 -- # spdk_tgt_pid=59857 00:07:14.199 06:37:28 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:14.200 06:37:28 -- app/cmdline.sh@18 -- # waitforlisten 59857 00:07:14.200 06:37:28 -- common/autotest_common.sh@829 -- # '[' -z 59857 ']' 00:07:14.200 06:37:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.200 06:37:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.200 06:37:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.200 06:37:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.200 06:37:28 -- common/autotest_common.sh@10 -- # set +x 00:07:14.458 [2024-12-14 06:37:28.207861] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:14.458 [2024-12-14 06:37:28.208267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59857 ] 00:07:14.458 [2024-12-14 06:37:28.347782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.716 [2024-12-14 06:37:28.468547] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:14.716 [2024-12-14 06:37:28.469027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.284 06:37:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.284 06:37:29 -- common/autotest_common.sh@862 -- # return 0 00:07:15.284 06:37:29 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:15.542 { 00:07:15.542 "fields": { 00:07:15.543 "commit": "c13c99a5e", 00:07:15.543 "major": 24, 00:07:15.543 "minor": 1, 00:07:15.543 "patch": 1, 00:07:15.543 "suffix": "-pre" 00:07:15.543 }, 00:07:15.543 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e" 00:07:15.543 } 00:07:15.543 06:37:29 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:15.543 06:37:29 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:15.543 06:37:29 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:15.543 06:37:29 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:15.543 06:37:29 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:15.543 06:37:29 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:15.543 06:37:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.543 06:37:29 -- app/cmdline.sh@26 -- # sort 00:07:15.543 06:37:29 -- common/autotest_common.sh@10 -- # set +x 00:07:15.543 06:37:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.543 06:37:29 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:15.543 06:37:29 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:15.543 06:37:29 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:15.543 06:37:29 -- common/autotest_common.sh@650 -- # local es=0 00:07:15.543 06:37:29 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:15.543 06:37:29 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:15.543 06:37:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.543 06:37:29 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:15.543 06:37:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.543 06:37:29 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:15.543 06:37:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.543 06:37:29 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:15.543 06:37:29 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:15.543 06:37:29 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.110 2024/12/14 06:37:29 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:16.110 request: 00:07:16.110 { 00:07:16.110 "method": "env_dpdk_get_mem_stats", 00:07:16.110 "params": {} 00:07:16.110 } 00:07:16.110 Got JSON-RPC error response 00:07:16.110 GoRPCClient: error on JSON-RPC call 00:07:16.110 06:37:29 -- common/autotest_common.sh@653 -- # es=1 00:07:16.110 06:37:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:16.110 06:37:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:16.110 06:37:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:16.110 06:37:29 -- app/cmdline.sh@1 -- # killprocess 59857 00:07:16.110 06:37:29 -- common/autotest_common.sh@936 -- # '[' -z 59857 ']' 00:07:16.110 06:37:29 -- common/autotest_common.sh@940 -- # kill -0 59857 00:07:16.110 06:37:29 -- common/autotest_common.sh@941 -- # uname 00:07:16.110 06:37:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:16.110 06:37:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59857 00:07:16.110 06:37:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:16.110 06:37:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:16.110 killing process with pid 59857 00:07:16.110 06:37:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59857' 00:07:16.110 06:37:29 -- common/autotest_common.sh@955 -- # kill 59857 00:07:16.110 06:37:29 -- common/autotest_common.sh@960 -- # wait 59857 00:07:16.678 ************************************ 00:07:16.678 END TEST app_cmdline 00:07:16.678 ************************************ 00:07:16.678 00:07:16.678 real 0m2.399s 00:07:16.678 user 0m2.882s 00:07:16.678 sys 0m0.583s 00:07:16.678 06:37:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.678 06:37:30 -- common/autotest_common.sh@10 -- # set +x 00:07:16.678 06:37:30 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:16.678 06:37:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:16.678 06:37:30 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.678 06:37:30 -- common/autotest_common.sh@10 -- # set +x 00:07:16.678 ************************************ 00:07:16.678 START TEST version 00:07:16.678 ************************************ 00:07:16.678 06:37:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:16.678 * Looking for test storage... 00:07:16.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:16.678 06:37:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:16.678 06:37:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:16.678 06:37:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:16.678 06:37:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:16.678 06:37:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:16.678 06:37:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:16.678 06:37:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:16.678 06:37:30 -- scripts/common.sh@335 -- # IFS=.-: 00:07:16.678 06:37:30 -- scripts/common.sh@335 -- # read -ra ver1 00:07:16.678 06:37:30 -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.678 06:37:30 -- scripts/common.sh@336 -- # read -ra ver2 00:07:16.678 06:37:30 -- scripts/common.sh@337 -- # local 'op=<' 00:07:16.678 06:37:30 -- scripts/common.sh@339 -- # ver1_l=2 00:07:16.678 06:37:30 -- scripts/common.sh@340 -- # ver2_l=1 00:07:16.678 06:37:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:16.678 06:37:30 -- scripts/common.sh@343 -- # case "$op" in 00:07:16.678 06:37:30 -- scripts/common.sh@344 -- # : 1 00:07:16.678 06:37:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:16.678 06:37:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:16.678 06:37:30 -- scripts/common.sh@364 -- # decimal 1 00:07:16.678 06:37:30 -- scripts/common.sh@352 -- # local d=1 00:07:16.678 06:37:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.678 06:37:30 -- scripts/common.sh@354 -- # echo 1 00:07:16.678 06:37:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:16.678 06:37:30 -- scripts/common.sh@365 -- # decimal 2 00:07:16.678 06:37:30 -- scripts/common.sh@352 -- # local d=2 00:07:16.678 06:37:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.678 06:37:30 -- scripts/common.sh@354 -- # echo 2 00:07:16.678 06:37:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:16.678 06:37:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:16.678 06:37:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:16.678 06:37:30 -- scripts/common.sh@367 -- # return 0 00:07:16.678 06:37:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.678 06:37:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:16.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.678 --rc genhtml_branch_coverage=1 00:07:16.678 --rc genhtml_function_coverage=1 00:07:16.678 --rc genhtml_legend=1 00:07:16.678 --rc geninfo_all_blocks=1 00:07:16.678 --rc geninfo_unexecuted_blocks=1 00:07:16.678 00:07:16.678 ' 00:07:16.678 06:37:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:16.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.678 --rc genhtml_branch_coverage=1 00:07:16.678 --rc genhtml_function_coverage=1 00:07:16.678 --rc genhtml_legend=1 00:07:16.678 --rc geninfo_all_blocks=1 00:07:16.678 --rc geninfo_unexecuted_blocks=1 00:07:16.678 00:07:16.678 ' 00:07:16.678 
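[editor's note] In the app_cmdline run just above, spdk_tgt was started with an RPC allowlist and the test verified both sides of it: the two allowed methods answer, and anything else (env_dpdk_get_mem_stats) is rejected with JSON-RPC -32601. A minimal reproduction of that behaviour, assuming the same build paths; the direct rpc.py calls stand in for the harness's rpc_cmd wrapper:

BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$BIN --rpcs-allowed spdk_get_version,rpc_get_methods &   # only these two methods are reachable
# (wait for /var/tmp/spdk.sock before calling in)

$RPC spdk_get_version          # allowed: returns the version object ("SPDK v24.01.1-pre git sha1 c13c99a5e")
$RPC rpc_get_methods           # allowed: lists exactly the permitted methods
$RPC env_dpdk_get_mem_stats    # blocked: fails with Code=-32601 Msg=Method not found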
06:37:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:16.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.678 --rc genhtml_branch_coverage=1 00:07:16.678 --rc genhtml_function_coverage=1 00:07:16.678 --rc genhtml_legend=1 00:07:16.678 --rc geninfo_all_blocks=1 00:07:16.678 --rc geninfo_unexecuted_blocks=1 00:07:16.678 00:07:16.678 ' 00:07:16.678 06:37:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:16.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.678 --rc genhtml_branch_coverage=1 00:07:16.678 --rc genhtml_function_coverage=1 00:07:16.678 --rc genhtml_legend=1 00:07:16.678 --rc geninfo_all_blocks=1 00:07:16.678 --rc geninfo_unexecuted_blocks=1 00:07:16.678 00:07:16.678 ' 00:07:16.678 06:37:30 -- app/version.sh@17 -- # get_header_version major 00:07:16.678 06:37:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:16.678 06:37:30 -- app/version.sh@14 -- # cut -f2 00:07:16.678 06:37:30 -- app/version.sh@14 -- # tr -d '"' 00:07:16.678 06:37:30 -- app/version.sh@17 -- # major=24 00:07:16.678 06:37:30 -- app/version.sh@18 -- # get_header_version minor 00:07:16.678 06:37:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:16.678 06:37:30 -- app/version.sh@14 -- # tr -d '"' 00:07:16.678 06:37:30 -- app/version.sh@14 -- # cut -f2 00:07:16.678 06:37:30 -- app/version.sh@18 -- # minor=1 00:07:16.678 06:37:30 -- app/version.sh@19 -- # get_header_version patch 00:07:16.678 06:37:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:16.678 06:37:30 -- app/version.sh@14 -- # cut -f2 00:07:16.678 06:37:30 -- app/version.sh@14 -- # tr -d '"' 00:07:16.678 06:37:30 -- app/version.sh@19 -- # patch=1 00:07:16.678 06:37:30 -- app/version.sh@20 -- # get_header_version suffix 00:07:16.678 06:37:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:16.678 06:37:30 -- app/version.sh@14 -- # cut -f2 00:07:16.678 06:37:30 -- app/version.sh@14 -- # tr -d '"' 00:07:16.678 06:37:30 -- app/version.sh@20 -- # suffix=-pre 00:07:16.678 06:37:30 -- app/version.sh@22 -- # version=24.1 00:07:16.678 06:37:30 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:16.678 06:37:30 -- app/version.sh@25 -- # version=24.1.1 00:07:16.678 06:37:30 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:16.678 06:37:30 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:16.678 06:37:30 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:16.936 06:37:30 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:16.936 06:37:30 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:16.936 00:07:16.936 real 0m0.251s 00:07:16.936 user 0m0.177s 00:07:16.936 sys 0m0.111s 00:07:16.936 06:37:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.936 06:37:30 -- common/autotest_common.sh@10 -- # set +x 00:07:16.937 ************************************ 00:07:16.937 END TEST version 00:07:16.937 ************************************ 00:07:16.937 06:37:30 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:16.937 
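[editor's note] The version test above rebuilds the SPDK version string from the C header and cross-checks it against the Python package. Condensed from the exact pipeline in the trace; get_header_version is the test's own helper, reproduced here as a sketch, and the rc0 form is what the trace shows the dev Python package reporting:

H=/home/vagrant/spdk_repo/spdk/include/spdk/version.h

get_header_version() {    # e.g. get_header_version MAJOR -> 24
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$H" | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)     # 24
minor=$(get_header_version MINOR)     # 1
patch=$(get_header_version PATCH)     # 1
suffix=$(get_header_version SUFFIX)   # -pre

version=$major.$minor
(( patch != 0 )) && version=$version.$patch   # 24.1.1
version=${version}rc0                         # 24.1.1rc0, the pre-release form

py=$(python3 -c 'import spdk; print(spdk.__version__)')
[[ $py == "$version" ]] && echo "header and python package agree: $version"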
06:37:30 -- spdk/autotest.sh@191 -- # uname -s 00:07:16.937 06:37:30 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:16.937 06:37:30 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:16.937 06:37:30 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:16.937 06:37:30 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:16.937 06:37:30 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:16.937 06:37:30 -- spdk/autotest.sh@255 -- # timing_exit lib 00:07:16.937 06:37:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:16.937 06:37:30 -- common/autotest_common.sh@10 -- # set +x 00:07:16.937 06:37:30 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:16.937 06:37:30 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:16.937 06:37:30 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:16.937 06:37:30 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:16.937 06:37:30 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:07:16.937 06:37:30 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:07:16.937 06:37:30 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:16.937 06:37:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:16.937 06:37:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.937 06:37:30 -- common/autotest_common.sh@10 -- # set +x 00:07:16.937 ************************************ 00:07:16.937 START TEST nvmf_tcp 00:07:16.937 ************************************ 00:07:16.937 06:37:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:16.937 * Looking for test storage... 00:07:16.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:16.937 06:37:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:16.937 06:37:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:16.937 06:37:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:17.317 06:37:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:17.317 06:37:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:17.317 06:37:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:17.317 06:37:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:17.317 06:37:30 -- scripts/common.sh@335 -- # IFS=.-: 00:07:17.317 06:37:30 -- scripts/common.sh@335 -- # read -ra ver1 00:07:17.317 06:37:30 -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.317 06:37:30 -- scripts/common.sh@336 -- # read -ra ver2 00:07:17.317 06:37:30 -- scripts/common.sh@337 -- # local 'op=<' 00:07:17.317 06:37:30 -- scripts/common.sh@339 -- # ver1_l=2 00:07:17.317 06:37:30 -- scripts/common.sh@340 -- # ver2_l=1 00:07:17.317 06:37:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:17.317 06:37:30 -- scripts/common.sh@343 -- # case "$op" in 00:07:17.317 06:37:30 -- scripts/common.sh@344 -- # : 1 00:07:17.317 06:37:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:17.317 06:37:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:17.317 06:37:30 -- scripts/common.sh@364 -- # decimal 1 00:07:17.317 06:37:30 -- scripts/common.sh@352 -- # local d=1 00:07:17.317 06:37:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.317 06:37:30 -- scripts/common.sh@354 -- # echo 1 00:07:17.317 06:37:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:17.317 06:37:30 -- scripts/common.sh@365 -- # decimal 2 00:07:17.317 06:37:30 -- scripts/common.sh@352 -- # local d=2 00:07:17.317 06:37:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.317 06:37:30 -- scripts/common.sh@354 -- # echo 2 00:07:17.317 06:37:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:17.317 06:37:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:17.317 06:37:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:17.317 06:37:30 -- scripts/common.sh@367 -- # return 0 00:07:17.317 06:37:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.317 06:37:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:17.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.317 --rc genhtml_branch_coverage=1 00:07:17.317 --rc genhtml_function_coverage=1 00:07:17.317 --rc genhtml_legend=1 00:07:17.317 --rc geninfo_all_blocks=1 00:07:17.317 --rc geninfo_unexecuted_blocks=1 00:07:17.317 00:07:17.317 ' 00:07:17.317 06:37:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:17.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.317 --rc genhtml_branch_coverage=1 00:07:17.317 --rc genhtml_function_coverage=1 00:07:17.317 --rc genhtml_legend=1 00:07:17.317 --rc geninfo_all_blocks=1 00:07:17.317 --rc geninfo_unexecuted_blocks=1 00:07:17.317 00:07:17.317 ' 00:07:17.317 06:37:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:17.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.317 --rc genhtml_branch_coverage=1 00:07:17.317 --rc genhtml_function_coverage=1 00:07:17.317 --rc genhtml_legend=1 00:07:17.317 --rc geninfo_all_blocks=1 00:07:17.317 --rc geninfo_unexecuted_blocks=1 00:07:17.317 00:07:17.317 ' 00:07:17.317 06:37:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:17.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.317 --rc genhtml_branch_coverage=1 00:07:17.317 --rc genhtml_function_coverage=1 00:07:17.317 --rc genhtml_legend=1 00:07:17.317 --rc geninfo_all_blocks=1 00:07:17.317 --rc geninfo_unexecuted_blocks=1 00:07:17.317 00:07:17.317 ' 00:07:17.317 06:37:30 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:17.317 06:37:30 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:17.317 06:37:30 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:17.317 06:37:30 -- nvmf/common.sh@7 -- # uname -s 00:07:17.317 06:37:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.317 06:37:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.317 06:37:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.317 06:37:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.317 06:37:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.317 06:37:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.317 06:37:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.317 06:37:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.317 06:37:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.317 06:37:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.317 06:37:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:07:17.317 06:37:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:07:17.317 06:37:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.317 06:37:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.317 06:37:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:17.317 06:37:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:17.317 06:37:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.317 06:37:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.317 06:37:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.317 06:37:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.317 06:37:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.317 06:37:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.317 06:37:30 -- paths/export.sh@5 -- # export PATH 00:07:17.317 06:37:30 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.317 06:37:30 -- nvmf/common.sh@46 -- # : 0 00:07:17.317 06:37:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:17.317 06:37:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:17.317 06:37:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:17.317 06:37:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.317 06:37:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.317 06:37:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:17.317 06:37:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:17.317 06:37:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:17.317 06:37:30 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:17.317 06:37:30 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:17.317 06:37:30 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:17.317 06:37:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:17.317 06:37:30 -- common/autotest_common.sh@10 -- # set +x 00:07:17.317 06:37:30 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:17.317 06:37:30 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:17.317 06:37:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:17.317 06:37:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.317 06:37:30 -- common/autotest_common.sh@10 -- # set +x 00:07:17.317 ************************************ 00:07:17.317 START TEST nvmf_example 00:07:17.317 ************************************ 00:07:17.317 06:37:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:17.317 * Looking for test storage... 00:07:17.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:17.317 06:37:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:17.317 06:37:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:17.317 06:37:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:17.317 06:37:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:17.317 06:37:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:17.317 06:37:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:17.317 06:37:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:17.317 06:37:31 -- scripts/common.sh@335 -- # IFS=.-: 00:07:17.317 06:37:31 -- scripts/common.sh@335 -- # read -ra ver1 00:07:17.317 06:37:31 -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.317 06:37:31 -- scripts/common.sh@336 -- # read -ra ver2 00:07:17.317 06:37:31 -- scripts/common.sh@337 -- # local 'op=<' 00:07:17.317 06:37:31 -- scripts/common.sh@339 -- # ver1_l=2 00:07:17.317 06:37:31 -- scripts/common.sh@340 -- # ver2_l=1 00:07:17.317 06:37:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:17.317 06:37:31 -- scripts/common.sh@343 -- # case "$op" in 00:07:17.317 06:37:31 -- scripts/common.sh@344 -- # : 1 00:07:17.317 06:37:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:17.317 06:37:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:17.318 06:37:31 -- scripts/common.sh@364 -- # decimal 1 00:07:17.318 06:37:31 -- scripts/common.sh@352 -- # local d=1 00:07:17.318 06:37:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.318 06:37:31 -- scripts/common.sh@354 -- # echo 1 00:07:17.318 06:37:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:17.318 06:37:31 -- scripts/common.sh@365 -- # decimal 2 00:07:17.318 06:37:31 -- scripts/common.sh@352 -- # local d=2 00:07:17.318 06:37:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.318 06:37:31 -- scripts/common.sh@354 -- # echo 2 00:07:17.318 06:37:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:17.318 06:37:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:17.318 06:37:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:17.318 06:37:31 -- scripts/common.sh@367 -- # return 0 00:07:17.318 06:37:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.318 06:37:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:17.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.318 --rc genhtml_branch_coverage=1 00:07:17.318 --rc genhtml_function_coverage=1 00:07:17.318 --rc genhtml_legend=1 00:07:17.318 --rc geninfo_all_blocks=1 00:07:17.318 --rc geninfo_unexecuted_blocks=1 00:07:17.318 00:07:17.318 ' 00:07:17.318 06:37:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:17.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.318 --rc genhtml_branch_coverage=1 00:07:17.318 --rc genhtml_function_coverage=1 00:07:17.318 --rc genhtml_legend=1 00:07:17.318 --rc geninfo_all_blocks=1 00:07:17.318 --rc geninfo_unexecuted_blocks=1 00:07:17.318 00:07:17.318 ' 00:07:17.318 06:37:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:17.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.318 --rc genhtml_branch_coverage=1 00:07:17.318 --rc genhtml_function_coverage=1 00:07:17.318 --rc genhtml_legend=1 00:07:17.318 --rc geninfo_all_blocks=1 00:07:17.318 --rc geninfo_unexecuted_blocks=1 00:07:17.318 00:07:17.318 ' 00:07:17.318 06:37:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:17.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.318 --rc genhtml_branch_coverage=1 00:07:17.318 --rc genhtml_function_coverage=1 00:07:17.318 --rc genhtml_legend=1 00:07:17.318 --rc geninfo_all_blocks=1 00:07:17.318 --rc geninfo_unexecuted_blocks=1 00:07:17.318 00:07:17.318 ' 00:07:17.318 06:37:31 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:17.318 06:37:31 -- nvmf/common.sh@7 -- # uname -s 00:07:17.318 06:37:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.318 06:37:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.318 06:37:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.318 06:37:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.318 06:37:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.318 06:37:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.318 06:37:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.318 06:37:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.318 06:37:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.318 06:37:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.318 06:37:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 
00:07:17.318 06:37:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:07:17.318 06:37:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.318 06:37:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.318 06:37:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:17.318 06:37:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:17.318 06:37:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.318 06:37:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.318 06:37:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.318 06:37:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.318 06:37:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.318 06:37:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.318 06:37:31 -- paths/export.sh@5 -- # export PATH 00:07:17.318 06:37:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.318 06:37:31 -- nvmf/common.sh@46 -- # : 0 00:07:17.318 06:37:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:17.318 06:37:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:17.318 06:37:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:17.318 06:37:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.318 06:37:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.318 06:37:31 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:07:17.318 06:37:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:17.318 06:37:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:17.318 06:37:31 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:17.318 06:37:31 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:17.318 06:37:31 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:17.318 06:37:31 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:17.318 06:37:31 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:17.318 06:37:31 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:17.318 06:37:31 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:17.318 06:37:31 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:17.318 06:37:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:17.318 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:07:17.318 06:37:31 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:17.318 06:37:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:17.318 06:37:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.318 06:37:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:17.318 06:37:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:17.318 06:37:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:17.318 06:37:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.318 06:37:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:17.318 06:37:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.318 06:37:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:17.318 06:37:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:17.318 06:37:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:17.318 06:37:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:17.318 06:37:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:17.318 06:37:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:17.318 06:37:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:17.318 06:37:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:17.318 06:37:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:17.318 06:37:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:17.318 06:37:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:17.318 06:37:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:17.318 06:37:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:17.318 06:37:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:17.318 06:37:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:17.318 06:37:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:17.318 06:37:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:17.318 06:37:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:17.318 06:37:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:17.318 Cannot find device "nvmf_init_br" 00:07:17.318 06:37:31 -- nvmf/common.sh@153 -- # true 00:07:17.318 06:37:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:17.318 Cannot find device "nvmf_tgt_br" 00:07:17.318 06:37:31 -- nvmf/common.sh@154 -- # true 00:07:17.318 06:37:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:17.318 Cannot find device "nvmf_tgt_br2" 
00:07:17.318 06:37:31 -- nvmf/common.sh@155 -- # true 00:07:17.318 06:37:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:17.318 Cannot find device "nvmf_init_br" 00:07:17.318 06:37:31 -- nvmf/common.sh@156 -- # true 00:07:17.318 06:37:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:17.318 Cannot find device "nvmf_tgt_br" 00:07:17.318 06:37:31 -- nvmf/common.sh@157 -- # true 00:07:17.318 06:37:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:17.577 Cannot find device "nvmf_tgt_br2" 00:07:17.577 06:37:31 -- nvmf/common.sh@158 -- # true 00:07:17.577 06:37:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:17.577 Cannot find device "nvmf_br" 00:07:17.577 06:37:31 -- nvmf/common.sh@159 -- # true 00:07:17.577 06:37:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:17.577 Cannot find device "nvmf_init_if" 00:07:17.577 06:37:31 -- nvmf/common.sh@160 -- # true 00:07:17.577 06:37:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:17.577 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:17.577 06:37:31 -- nvmf/common.sh@161 -- # true 00:07:17.577 06:37:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:17.577 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:17.577 06:37:31 -- nvmf/common.sh@162 -- # true 00:07:17.577 06:37:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:17.577 06:37:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:17.577 06:37:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:17.577 06:37:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:17.577 06:37:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:17.577 06:37:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:17.577 06:37:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:17.577 06:37:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:17.577 06:37:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:17.577 06:37:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:17.577 06:37:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:17.577 06:37:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:17.577 06:37:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:17.577 06:37:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:17.577 06:37:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:17.577 06:37:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:17.577 06:37:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:17.577 06:37:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:17.577 06:37:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:17.577 06:37:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:17.577 06:37:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:17.577 06:37:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:17.577 06:37:31 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:17.577 06:37:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:17.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:17.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:07:17.577 00:07:17.577 --- 10.0.0.2 ping statistics --- 00:07:17.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.577 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:07:17.577 06:37:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:17.577 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:17.577 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:07:17.577 00:07:17.577 --- 10.0.0.3 ping statistics --- 00:07:17.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.577 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:07:17.577 06:37:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:17.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:17.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:07:17.577 00:07:17.577 --- 10.0.0.1 ping statistics --- 00:07:17.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.577 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:17.577 06:37:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:17.577 06:37:31 -- nvmf/common.sh@421 -- # return 0 00:07:17.577 06:37:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:17.577 06:37:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:17.577 06:37:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:17.577 06:37:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:17.577 06:37:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:17.577 06:37:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:17.577 06:37:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:17.888 06:37:31 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:17.888 06:37:31 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:17.888 06:37:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:17.889 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:07:17.889 06:37:31 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:17.889 06:37:31 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:17.889 06:37:31 -- target/nvmf_example.sh@34 -- # nvmfpid=60245 00:07:17.889 06:37:31 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:17.889 06:37:31 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:17.889 06:37:31 -- target/nvmf_example.sh@36 -- # waitforlisten 60245 00:07:17.889 06:37:31 -- common/autotest_common.sh@829 -- # '[' -z 60245 ']' 00:07:17.889 06:37:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.889 06:37:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.889 06:37:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
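[editor's note] The nvmf_veth_init sequence traced above builds an all-virtual test network: the SPDK target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2/10.0.0.3, the initiator side stays in the root namespace on 10.0.0.1, and the three veth pairs are tied together by the nvmf_br bridge. Reduced to its essentials (same device names and addresses as the log; link-up commands, the 4420/tcp iptables rules, the ping checks and modprobe nvme-tcp are omitted here):

ip netns add nvmf_tgt_ns_spdk

# one veth pair per role; the *_br ends get bridged, the *_if ends carry the IPs
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                    # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # target, first listener address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # target, second address

ip link add nvmf_br type bridge
for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done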
00:07:17.889 06:37:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.889 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:07:18.926 06:37:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.926 06:37:32 -- common/autotest_common.sh@862 -- # return 0 00:07:18.926 06:37:32 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:18.926 06:37:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:18.926 06:37:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.926 06:37:32 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:18.926 06:37:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.926 06:37:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.926 06:37:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.926 06:37:32 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:18.926 06:37:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.926 06:37:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.926 06:37:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.926 06:37:32 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:18.926 06:37:32 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:18.926 06:37:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.926 06:37:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.926 06:37:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.926 06:37:32 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:18.926 06:37:32 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:18.926 06:37:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.926 06:37:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.926 06:37:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.926 06:37:32 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:18.926 06:37:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.926 06:37:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.926 06:37:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.926 06:37:32 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:18.926 06:37:32 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:31.189 Initializing NVMe Controllers 00:07:31.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:31.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:31.189 Initialization complete. Launching workers. 
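[editor's note] The example target above is provisioned with a handful of RPCs before the perf load is pointed at it. The same sequence written out explicitly; every command and argument is taken from the trace, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper:

# target: the nvmf example app, pinned inside the test namespace (verbatim launch line from the trace)
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF &

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                                # transport options as traced
$RPC bdev_malloc_create 64 512                                              # 64 MiB RAM-backed bdev -> Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# load generator, run from the root namespace against the 10.0.0.2:4420 listener:
# queue depth 64, 4 KiB I/O, mixed random read/write (-M 30), 10 seconds
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'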
00:07:31.189 ======================================================== 00:07:31.189 Latency(us) 00:07:31.189 Device Information : IOPS MiB/s Average min max 00:07:31.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15801.01 61.72 4050.06 685.61 21097.07 00:07:31.189 ======================================================== 00:07:31.189 Total : 15801.01 61.72 4050.06 685.61 21097.07 00:07:31.189 00:07:31.189 06:37:43 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:31.189 06:37:43 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:31.189 06:37:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:31.189 06:37:43 -- nvmf/common.sh@116 -- # sync 00:07:31.189 06:37:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:31.189 06:37:43 -- nvmf/common.sh@119 -- # set +e 00:07:31.189 06:37:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:31.189 06:37:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:31.189 rmmod nvme_tcp 00:07:31.189 rmmod nvme_fabrics 00:07:31.189 rmmod nvme_keyring 00:07:31.189 06:37:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:31.189 06:37:43 -- nvmf/common.sh@123 -- # set -e 00:07:31.189 06:37:43 -- nvmf/common.sh@124 -- # return 0 00:07:31.189 06:37:43 -- nvmf/common.sh@477 -- # '[' -n 60245 ']' 00:07:31.189 06:37:43 -- nvmf/common.sh@478 -- # killprocess 60245 00:07:31.189 06:37:43 -- common/autotest_common.sh@936 -- # '[' -z 60245 ']' 00:07:31.189 06:37:43 -- common/autotest_common.sh@940 -- # kill -0 60245 00:07:31.189 06:37:43 -- common/autotest_common.sh@941 -- # uname 00:07:31.189 06:37:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:31.189 06:37:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60245 00:07:31.189 06:37:43 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:31.189 06:37:43 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:31.189 killing process with pid 60245 00:07:31.189 06:37:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60245' 00:07:31.189 06:37:43 -- common/autotest_common.sh@955 -- # kill 60245 00:07:31.189 06:37:43 -- common/autotest_common.sh@960 -- # wait 60245 00:07:31.189 nvmf threads initialize successfully 00:07:31.189 bdev subsystem init successfully 00:07:31.189 created a nvmf target service 00:07:31.189 create targets's poll groups done 00:07:31.189 all subsystems of target started 00:07:31.189 nvmf target is running 00:07:31.189 all subsystems of target stopped 00:07:31.189 destroy targets's poll groups done 00:07:31.189 destroyed the nvmf target service 00:07:31.189 bdev subsystem finish successfully 00:07:31.189 nvmf threads destroy successfully 00:07:31.190 06:37:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:31.190 06:37:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:31.190 06:37:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:31.190 06:37:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:31.190 06:37:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:31.190 06:37:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.190 06:37:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.190 06:37:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.190 06:37:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:31.190 06:37:43 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:31.190 06:37:43 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:07:31.190 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:07:31.190 00:07:31.190 real 0m12.530s 00:07:31.190 user 0m44.865s 00:07:31.190 sys 0m1.998s 00:07:31.190 06:37:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:31.190 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:07:31.190 ************************************ 00:07:31.190 END TEST nvmf_example 00:07:31.190 ************************************ 00:07:31.190 06:37:43 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:31.190 06:37:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:31.190 06:37:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.190 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:07:31.190 ************************************ 00:07:31.190 START TEST nvmf_filesystem 00:07:31.190 ************************************ 00:07:31.190 06:37:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:31.190 * Looking for test storage... 00:07:31.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:31.190 06:37:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:31.190 06:37:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:31.190 06:37:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:31.190 06:37:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:31.190 06:37:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:31.190 06:37:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:31.190 06:37:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:31.190 06:37:43 -- scripts/common.sh@335 -- # IFS=.-: 00:07:31.190 06:37:43 -- scripts/common.sh@335 -- # read -ra ver1 00:07:31.190 06:37:43 -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.190 06:37:43 -- scripts/common.sh@336 -- # read -ra ver2 00:07:31.190 06:37:43 -- scripts/common.sh@337 -- # local 'op=<' 00:07:31.190 06:37:43 -- scripts/common.sh@339 -- # ver1_l=2 00:07:31.190 06:37:43 -- scripts/common.sh@340 -- # ver2_l=1 00:07:31.190 06:37:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:31.190 06:37:43 -- scripts/common.sh@343 -- # case "$op" in 00:07:31.190 06:37:43 -- scripts/common.sh@344 -- # : 1 00:07:31.190 06:37:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:31.190 06:37:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:31.190 06:37:43 -- scripts/common.sh@364 -- # decimal 1 00:07:31.190 06:37:43 -- scripts/common.sh@352 -- # local d=1 00:07:31.190 06:37:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.190 06:37:43 -- scripts/common.sh@354 -- # echo 1 00:07:31.190 06:37:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:31.190 06:37:43 -- scripts/common.sh@365 -- # decimal 2 00:07:31.190 06:37:43 -- scripts/common.sh@352 -- # local d=2 00:07:31.190 06:37:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.190 06:37:43 -- scripts/common.sh@354 -- # echo 2 00:07:31.190 06:37:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:31.190 06:37:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:31.190 06:37:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:31.190 06:37:43 -- scripts/common.sh@367 -- # return 0 00:07:31.190 06:37:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.190 06:37:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:31.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.190 --rc genhtml_branch_coverage=1 00:07:31.190 --rc genhtml_function_coverage=1 00:07:31.190 --rc genhtml_legend=1 00:07:31.190 --rc geninfo_all_blocks=1 00:07:31.190 --rc geninfo_unexecuted_blocks=1 00:07:31.190 00:07:31.190 ' 00:07:31.190 06:37:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:31.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.190 --rc genhtml_branch_coverage=1 00:07:31.190 --rc genhtml_function_coverage=1 00:07:31.190 --rc genhtml_legend=1 00:07:31.190 --rc geninfo_all_blocks=1 00:07:31.190 --rc geninfo_unexecuted_blocks=1 00:07:31.190 00:07:31.190 ' 00:07:31.190 06:37:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:31.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.190 --rc genhtml_branch_coverage=1 00:07:31.190 --rc genhtml_function_coverage=1 00:07:31.190 --rc genhtml_legend=1 00:07:31.190 --rc geninfo_all_blocks=1 00:07:31.190 --rc geninfo_unexecuted_blocks=1 00:07:31.190 00:07:31.190 ' 00:07:31.190 06:37:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:31.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.190 --rc genhtml_branch_coverage=1 00:07:31.190 --rc genhtml_function_coverage=1 00:07:31.190 --rc genhtml_legend=1 00:07:31.190 --rc geninfo_all_blocks=1 00:07:31.190 --rc geninfo_unexecuted_blocks=1 00:07:31.190 00:07:31.190 ' 00:07:31.190 06:37:43 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:31.190 06:37:43 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:31.190 06:37:43 -- common/autotest_common.sh@34 -- # set -e 00:07:31.190 06:37:43 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:31.190 06:37:43 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:31.190 06:37:43 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:31.190 06:37:43 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:31.190 06:37:43 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:31.190 06:37:43 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:31.190 06:37:43 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:31.190 06:37:43 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:31.190 06:37:43 -- common/build_config.sh@5 -- # 
CONFIG_USDT=y 00:07:31.190 06:37:43 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:31.190 06:37:43 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:31.190 06:37:43 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:31.190 06:37:43 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:31.190 06:37:43 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:31.190 06:37:43 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:31.190 06:37:43 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:31.190 06:37:43 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:31.190 06:37:43 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:31.190 06:37:43 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:31.190 06:37:43 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:31.190 06:37:43 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:31.190 06:37:43 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:31.190 06:37:43 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:31.190 06:37:43 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:31.190 06:37:43 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:31.190 06:37:43 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:31.190 06:37:43 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:31.190 06:37:43 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:31.190 06:37:43 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:31.190 06:37:43 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:31.190 06:37:43 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:31.190 06:37:43 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:31.190 06:37:43 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:31.190 06:37:43 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:31.190 06:37:43 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:31.190 06:37:43 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:31.190 06:37:43 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:31.190 06:37:43 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:31.190 06:37:43 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:31.190 06:37:43 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:31.190 06:37:43 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:31.190 06:37:43 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:31.190 06:37:43 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:31.190 06:37:43 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:31.190 06:37:43 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:31.190 06:37:43 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:31.190 06:37:43 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:31.190 06:37:43 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:31.190 06:37:43 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:31.190 06:37:43 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:31.190 06:37:43 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:31.190 06:37:43 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:31.190 06:37:43 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:31.190 06:37:43 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:31.190 06:37:43 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:31.190 
06:37:43 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:31.190 06:37:43 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:31.190 06:37:43 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:31.190 06:37:43 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:31.190 06:37:43 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:31.191 06:37:43 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:31.191 06:37:43 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:07:31.191 06:37:43 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:31.191 06:37:43 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:31.191 06:37:43 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:07:31.191 06:37:43 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:31.191 06:37:43 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:31.191 06:37:43 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:31.191 06:37:43 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:31.191 06:37:43 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:31.191 06:37:43 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:31.191 06:37:43 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:07:31.191 06:37:43 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:31.191 06:37:43 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:31.191 06:37:43 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:31.191 06:37:43 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:31.191 06:37:43 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:31.191 06:37:43 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:31.191 06:37:43 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:31.191 06:37:43 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:31.191 06:37:43 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:31.191 06:37:43 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:31.191 06:37:43 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:31.191 06:37:43 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:31.191 06:37:43 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:31.191 06:37:43 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:31.191 06:37:43 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:31.191 06:37:43 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:31.191 06:37:43 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:31.191 06:37:43 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:31.191 06:37:43 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:31.191 06:37:43 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:31.191 06:37:43 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:31.191 06:37:43 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:31.191 06:37:43 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:31.191 06:37:43 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:31.191 06:37:43 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:31.191 06:37:43 -- common/applications.sh@22 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:31.191 06:37:43 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:31.191 #define SPDK_CONFIG_H 00:07:31.191 #define SPDK_CONFIG_APPS 1 00:07:31.191 #define SPDK_CONFIG_ARCH native 00:07:31.191 #undef SPDK_CONFIG_ASAN 00:07:31.191 #define SPDK_CONFIG_AVAHI 1 00:07:31.191 #undef SPDK_CONFIG_CET 00:07:31.191 #define SPDK_CONFIG_COVERAGE 1 00:07:31.191 #define SPDK_CONFIG_CROSS_PREFIX 00:07:31.191 #undef SPDK_CONFIG_CRYPTO 00:07:31.191 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:31.191 #undef SPDK_CONFIG_CUSTOMOCF 00:07:31.191 #undef SPDK_CONFIG_DAOS 00:07:31.191 #define SPDK_CONFIG_DAOS_DIR 00:07:31.191 #define SPDK_CONFIG_DEBUG 1 00:07:31.191 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:31.191 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:31.191 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:31.191 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:31.191 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:31.191 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:31.191 #define SPDK_CONFIG_EXAMPLES 1 00:07:31.191 #undef SPDK_CONFIG_FC 00:07:31.191 #define SPDK_CONFIG_FC_PATH 00:07:31.191 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:31.191 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:31.191 #undef SPDK_CONFIG_FUSE 00:07:31.191 #undef SPDK_CONFIG_FUZZER 00:07:31.191 #define SPDK_CONFIG_FUZZER_LIB 00:07:31.191 #define SPDK_CONFIG_GOLANG 1 00:07:31.191 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:31.191 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:31.191 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:31.191 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:31.191 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:31.191 #define SPDK_CONFIG_IDXD 1 00:07:31.191 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:31.191 #undef SPDK_CONFIG_IPSEC_MB 00:07:31.191 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:31.191 #define SPDK_CONFIG_ISAL 1 00:07:31.191 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:31.191 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:31.191 #define SPDK_CONFIG_LIBDIR 00:07:31.191 #undef SPDK_CONFIG_LTO 00:07:31.191 #define SPDK_CONFIG_MAX_LCORES 00:07:31.191 #define SPDK_CONFIG_NVME_CUSE 1 00:07:31.191 #undef SPDK_CONFIG_OCF 00:07:31.191 #define SPDK_CONFIG_OCF_PATH 00:07:31.191 #define SPDK_CONFIG_OPENSSL_PATH 00:07:31.191 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:31.191 #undef SPDK_CONFIG_PGO_USE 00:07:31.191 #define SPDK_CONFIG_PREFIX /usr/local 00:07:31.191 #undef SPDK_CONFIG_RAID5F 00:07:31.191 #undef SPDK_CONFIG_RBD 00:07:31.191 #define SPDK_CONFIG_RDMA 1 00:07:31.191 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:31.191 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:31.191 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:31.191 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:31.191 #define SPDK_CONFIG_SHARED 1 00:07:31.191 #undef SPDK_CONFIG_SMA 00:07:31.191 #define SPDK_CONFIG_TESTS 1 00:07:31.191 #undef SPDK_CONFIG_TSAN 00:07:31.191 #define SPDK_CONFIG_UBLK 1 00:07:31.191 #define SPDK_CONFIG_UBSAN 1 00:07:31.191 #undef SPDK_CONFIG_UNIT_TESTS 00:07:31.191 #undef SPDK_CONFIG_URING 00:07:31.191 #define SPDK_CONFIG_URING_PATH 00:07:31.191 #undef SPDK_CONFIG_URING_ZNS 00:07:31.191 #define SPDK_CONFIG_USDT 1 00:07:31.191 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:31.191 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:31.191 #define SPDK_CONFIG_VFIO_USER 1 00:07:31.191 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:31.191 #define SPDK_CONFIG_VHOST 1 00:07:31.191 #define SPDK_CONFIG_VIRTIO 1 00:07:31.191 #undef SPDK_CONFIG_VTUNE 00:07:31.191 #define SPDK_CONFIG_VTUNE_DIR 
00:07:31.191 #define SPDK_CONFIG_WERROR 1 00:07:31.191 #define SPDK_CONFIG_WPDK_DIR 00:07:31.191 #undef SPDK_CONFIG_XNVME 00:07:31.191 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:31.191 06:37:43 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:31.191 06:37:43 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:31.191 06:37:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.191 06:37:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.191 06:37:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.191 06:37:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.191 06:37:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.191 06:37:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.191 06:37:43 -- paths/export.sh@5 -- # export PATH 00:07:31.191 06:37:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.191 06:37:43 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:31.191 06:37:43 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:31.191 06:37:43 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:31.191 06:37:43 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:31.191 06:37:43 -- pm/common@7 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:31.191 06:37:43 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:31.191 06:37:43 -- pm/common@16 -- # TEST_TAG=N/A 00:07:31.192 06:37:43 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:31.192 06:37:43 -- common/autotest_common.sh@52 -- # : 1 00:07:31.192 06:37:43 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:31.192 06:37:43 -- common/autotest_common.sh@56 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:31.192 06:37:43 -- common/autotest_common.sh@58 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:31.192 06:37:43 -- common/autotest_common.sh@60 -- # : 1 00:07:31.192 06:37:43 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:31.192 06:37:43 -- common/autotest_common.sh@62 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:31.192 06:37:43 -- common/autotest_common.sh@64 -- # : 00:07:31.192 06:37:43 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:31.192 06:37:43 -- common/autotest_common.sh@66 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:31.192 06:37:43 -- common/autotest_common.sh@68 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:31.192 06:37:43 -- common/autotest_common.sh@70 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:31.192 06:37:43 -- common/autotest_common.sh@72 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:31.192 06:37:43 -- common/autotest_common.sh@74 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:31.192 06:37:43 -- common/autotest_common.sh@76 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:31.192 06:37:43 -- common/autotest_common.sh@78 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:31.192 06:37:43 -- common/autotest_common.sh@80 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:31.192 06:37:43 -- common/autotest_common.sh@82 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:31.192 06:37:43 -- common/autotest_common.sh@84 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:31.192 06:37:43 -- common/autotest_common.sh@86 -- # : 1 00:07:31.192 06:37:43 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:31.192 06:37:43 -- common/autotest_common.sh@88 -- # : 1 00:07:31.192 06:37:43 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:31.192 06:37:43 -- common/autotest_common.sh@90 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:31.192 06:37:43 -- common/autotest_common.sh@92 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:31.192 06:37:43 -- common/autotest_common.sh@94 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:31.192 06:37:43 -- common/autotest_common.sh@96 -- # : tcp 00:07:31.192 06:37:43 -- common/autotest_common.sh@97 -- # export 
SPDK_TEST_NVMF_TRANSPORT 00:07:31.192 06:37:43 -- common/autotest_common.sh@98 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:31.192 06:37:43 -- common/autotest_common.sh@100 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:31.192 06:37:43 -- common/autotest_common.sh@102 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:31.192 06:37:43 -- common/autotest_common.sh@104 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:31.192 06:37:43 -- common/autotest_common.sh@106 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:31.192 06:37:43 -- common/autotest_common.sh@108 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:31.192 06:37:43 -- common/autotest_common.sh@110 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:31.192 06:37:43 -- common/autotest_common.sh@112 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:31.192 06:37:43 -- common/autotest_common.sh@114 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:31.192 06:37:43 -- common/autotest_common.sh@116 -- # : 1 00:07:31.192 06:37:43 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:31.192 06:37:43 -- common/autotest_common.sh@118 -- # : 00:07:31.192 06:37:43 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:31.192 06:37:43 -- common/autotest_common.sh@120 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:31.192 06:37:43 -- common/autotest_common.sh@122 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:31.192 06:37:43 -- common/autotest_common.sh@124 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:31.192 06:37:43 -- common/autotest_common.sh@126 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:31.192 06:37:43 -- common/autotest_common.sh@128 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:31.192 06:37:43 -- common/autotest_common.sh@130 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:31.192 06:37:43 -- common/autotest_common.sh@132 -- # : 00:07:31.192 06:37:43 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:31.192 06:37:43 -- common/autotest_common.sh@134 -- # : true 00:07:31.192 06:37:43 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:31.192 06:37:43 -- common/autotest_common.sh@136 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:31.192 06:37:43 -- common/autotest_common.sh@138 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:31.192 06:37:43 -- common/autotest_common.sh@140 -- # : 1 00:07:31.192 06:37:43 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:31.192 06:37:43 -- common/autotest_common.sh@142 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:31.192 06:37:43 -- common/autotest_common.sh@144 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@145 -- # 
export SPDK_TEST_SCHEDULER 00:07:31.192 06:37:43 -- common/autotest_common.sh@146 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:31.192 06:37:43 -- common/autotest_common.sh@148 -- # : 00:07:31.192 06:37:43 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:31.192 06:37:43 -- common/autotest_common.sh@150 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:31.192 06:37:43 -- common/autotest_common.sh@152 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:31.192 06:37:43 -- common/autotest_common.sh@154 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:31.192 06:37:43 -- common/autotest_common.sh@156 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:31.192 06:37:43 -- common/autotest_common.sh@158 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:31.192 06:37:43 -- common/autotest_common.sh@160 -- # : 0 00:07:31.192 06:37:43 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:31.192 06:37:43 -- common/autotest_common.sh@163 -- # : 00:07:31.192 06:37:43 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:31.192 06:37:43 -- common/autotest_common.sh@165 -- # : 1 00:07:31.192 06:37:43 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:31.192 06:37:43 -- common/autotest_common.sh@167 -- # : 1 00:07:31.192 06:37:43 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:31.192 06:37:43 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:31.192 06:37:43 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:31.192 06:37:43 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:31.192 06:37:43 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:31.192 06:37:43 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:31.192 06:37:43 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:31.192 06:37:43 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:31.192 06:37:43 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:31.192 06:37:43 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:31.192 06:37:43 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:31.193 06:37:43 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:31.193 06:37:43 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:31.193 06:37:43 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:31.193 06:37:43 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:31.193 06:37:43 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:31.193 06:37:43 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:31.193 06:37:43 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:31.193 06:37:43 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:31.193 06:37:43 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:31.193 06:37:43 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:31.193 06:37:43 -- common/autotest_common.sh@196 -- # cat 00:07:31.193 06:37:43 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:31.193 06:37:43 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:31.193 06:37:43 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:31.193 06:37:43 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:31.193 06:37:43 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:31.193 06:37:43 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:31.193 06:37:43 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:31.193 06:37:43 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:31.193 06:37:43 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:31.193 06:37:43 -- 
common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:31.193 06:37:43 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:31.193 06:37:43 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:31.193 06:37:43 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:31.193 06:37:43 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:31.193 06:37:43 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:31.193 06:37:43 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:31.193 06:37:43 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:31.193 06:37:43 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:31.193 06:37:43 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:31.193 06:37:43 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:07:31.193 06:37:43 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:07:31.193 06:37:43 -- common/autotest_common.sh@249 -- # _LCOV= 00:07:31.193 06:37:43 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:07:31.193 06:37:43 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:07:31.193 06:37:43 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:31.193 06:37:43 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:07:31.193 06:37:43 -- common/autotest_common.sh@255 -- # lcov_opt= 00:07:31.193 06:37:43 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:07:31.193 06:37:43 -- common/autotest_common.sh@259 -- # export valgrind= 00:07:31.193 06:37:43 -- common/autotest_common.sh@259 -- # valgrind= 00:07:31.193 06:37:43 -- common/autotest_common.sh@265 -- # uname -s 00:07:31.193 06:37:43 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:07:31.193 06:37:43 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:07:31.193 06:37:43 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:07:31.193 06:37:43 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:07:31.193 06:37:43 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:31.193 06:37:43 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:31.193 06:37:43 -- common/autotest_common.sh@275 -- # MAKE=make 00:07:31.193 06:37:43 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:07:31.193 06:37:43 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:07:31.193 06:37:43 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:07:31.193 06:37:43 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:31.193 06:37:43 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:07:31.193 06:37:43 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:07:31.193 06:37:43 -- common/autotest_common.sh@301 -- # for i in "$@" 00:07:31.193 06:37:43 -- common/autotest_common.sh@302 -- # case "$i" in 00:07:31.193 06:37:43 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp 00:07:31.193 06:37:43 -- common/autotest_common.sh@319 -- # [[ -z 60491 ]] 00:07:31.193 06:37:43 -- common/autotest_common.sh@319 -- # kill -0 60491 00:07:31.193 06:37:43 -- 
common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:07:31.193 06:37:43 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:07:31.193 06:37:43 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:07:31.193 06:37:43 -- common/autotest_common.sh@332 -- # local mount target_dir 00:07:31.193 06:37:43 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:07:31.193 06:37:43 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:07:31.193 06:37:43 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:07:31.193 06:37:43 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:07:31.193 06:37:43 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.UR7QLX 00:07:31.193 06:37:43 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:31.193 06:37:43 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:07:31.193 06:37:43 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:07:31.193 06:37:43 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.UR7QLX/tests/target /tmp/spdk.UR7QLX 00:07:31.193 06:37:43 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:07:31.193 06:37:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:31.193 06:37:43 -- common/autotest_common.sh@328 -- # df -T 00:07:31.193 06:37:43 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:07:31.193 06:37:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:07:31.193 06:37:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:07:31.193 06:37:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=14016282624 00:07:31.193 06:37:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:07:31.193 06:37:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=5551144960 00:07:31.193 06:37:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:31.193 06:37:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=devtmpfs 00:07:31.193 06:37:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:07:31.193 06:37:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=4194304 00:07:31.193 06:37:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=4194304 00:07:31.193 06:37:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:07:31.193 06:37:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:31.193 06:37:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:31.193 06:37:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:31.193 06:37:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=6265167872 00:07:31.193 06:37:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266425344 00:07:31.193 06:37:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:07:31.193 06:37:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:31.193 06:37:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:31.193 06:37:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:31.193 06:37:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=2493755392 00:07:31.193 06:37:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=2506571776 00:07:31.193 06:37:43 -- common/autotest_common.sh@364 -- # 
uses["$mount"]=12816384 00:07:31.193 06:37:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:31.193 06:37:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:07:31.193 06:37:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:07:31.193 06:37:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=14016282624 00:07:31.193 06:37:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:07:31.193 06:37:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=5551144960 00:07:31.193 06:37:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:31.193 06:37:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:31.193 06:37:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:31.193 06:37:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=6266290176 00:07:31.193 06:37:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266425344 00:07:31.193 06:37:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=135168 00:07:31.193 06:37:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:31.193 06:37:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda2 00:07:31.193 06:37:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:07:31.193 06:37:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=840085504 00:07:31.193 06:37:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1012768768 00:07:31.193 06:37:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=103477248 00:07:31.193 06:37:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:31.193 06:37:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda3 00:07:31.193 06:37:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:07:31.193 06:37:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=91617280 00:07:31.193 06:37:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=104607744 00:07:31.193 06:37:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=12990464 00:07:31.193 06:37:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:31.193 06:37:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:31.193 06:37:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:31.193 06:37:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253269504 00:07:31.193 06:37:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253281792 00:07:31.193 06:37:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:07:31.193 06:37:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:31.194 06:37:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:07:31.194 06:37:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:07:31.194 06:37:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=97263710208 00:07:31.194 06:37:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:07:31.194 06:37:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=2439069696 00:07:31.194 06:37:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:31.194 06:37:43 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:07:31.194 * Looking for test storage... 
00:07:31.194 06:37:43 -- common/autotest_common.sh@369 -- # local target_space new_size 00:07:31.194 06:37:43 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:07:31.194 06:37:43 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:31.194 06:37:43 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:31.194 06:37:43 -- common/autotest_common.sh@373 -- # mount=/home 00:07:31.194 06:37:43 -- common/autotest_common.sh@375 -- # target_space=14016282624 00:07:31.194 06:37:43 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:07:31.194 06:37:43 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:07:31.194 06:37:43 -- common/autotest_common.sh@381 -- # [[ btrfs == tmpfs ]] 00:07:31.194 06:37:43 -- common/autotest_common.sh@381 -- # [[ btrfs == ramfs ]] 00:07:31.194 06:37:43 -- common/autotest_common.sh@381 -- # [[ /home == / ]] 00:07:31.194 06:37:43 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:31.194 06:37:43 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:31.194 06:37:43 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:31.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:31.194 06:37:43 -- common/autotest_common.sh@390 -- # return 0 00:07:31.194 06:37:43 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:07:31.194 06:37:43 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:07:31.194 06:37:43 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:31.194 06:37:43 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:31.194 06:37:43 -- common/autotest_common.sh@1682 -- # true 00:07:31.194 06:37:43 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:07:31.194 06:37:43 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:31.194 06:37:43 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:31.194 06:37:43 -- common/autotest_common.sh@27 -- # exec 00:07:31.194 06:37:43 -- common/autotest_common.sh@29 -- # exec 00:07:31.194 06:37:43 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:31.194 06:37:43 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:31.194 06:37:43 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:31.194 06:37:43 -- common/autotest_common.sh@18 -- # set -x 00:07:31.194 06:37:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:31.194 06:37:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:31.194 06:37:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:31.194 06:37:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:31.194 06:37:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:31.194 06:37:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:31.194 06:37:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:31.194 06:37:43 -- scripts/common.sh@335 -- # IFS=.-: 00:07:31.194 06:37:43 -- scripts/common.sh@335 -- # read -ra ver1 00:07:31.194 06:37:43 -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.194 06:37:43 -- scripts/common.sh@336 -- # read -ra ver2 00:07:31.194 06:37:43 -- scripts/common.sh@337 -- # local 'op=<' 00:07:31.194 06:37:43 -- scripts/common.sh@339 -- # ver1_l=2 00:07:31.194 06:37:43 -- scripts/common.sh@340 -- # ver2_l=1 00:07:31.194 06:37:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:31.194 06:37:43 -- scripts/common.sh@343 -- # case "$op" in 00:07:31.194 06:37:43 -- scripts/common.sh@344 -- # : 1 00:07:31.194 06:37:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:31.194 06:37:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:31.194 06:37:43 -- scripts/common.sh@364 -- # decimal 1 00:07:31.194 06:37:43 -- scripts/common.sh@352 -- # local d=1 00:07:31.194 06:37:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.194 06:37:43 -- scripts/common.sh@354 -- # echo 1 00:07:31.194 06:37:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:31.194 06:37:43 -- scripts/common.sh@365 -- # decimal 2 00:07:31.194 06:37:43 -- scripts/common.sh@352 -- # local d=2 00:07:31.194 06:37:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.194 06:37:43 -- scripts/common.sh@354 -- # echo 2 00:07:31.194 06:37:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:31.194 06:37:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:31.194 06:37:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:31.194 06:37:43 -- scripts/common.sh@367 -- # return 0 00:07:31.194 06:37:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.194 06:37:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:31.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.194 --rc genhtml_branch_coverage=1 00:07:31.194 --rc genhtml_function_coverage=1 00:07:31.194 --rc genhtml_legend=1 00:07:31.194 --rc geninfo_all_blocks=1 00:07:31.194 --rc geninfo_unexecuted_blocks=1 00:07:31.194 00:07:31.194 ' 00:07:31.194 06:37:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:31.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.194 --rc genhtml_branch_coverage=1 00:07:31.194 --rc genhtml_function_coverage=1 00:07:31.194 --rc genhtml_legend=1 00:07:31.194 --rc geninfo_all_blocks=1 00:07:31.194 --rc geninfo_unexecuted_blocks=1 00:07:31.194 00:07:31.194 ' 00:07:31.194 06:37:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:31.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.194 --rc genhtml_branch_coverage=1 00:07:31.194 --rc genhtml_function_coverage=1 00:07:31.194 --rc genhtml_legend=1 00:07:31.194 --rc geninfo_all_blocks=1 00:07:31.194 --rc 
geninfo_unexecuted_blocks=1 00:07:31.194 00:07:31.194 ' 00:07:31.194 06:37:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:31.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.194 --rc genhtml_branch_coverage=1 00:07:31.194 --rc genhtml_function_coverage=1 00:07:31.194 --rc genhtml_legend=1 00:07:31.194 --rc geninfo_all_blocks=1 00:07:31.194 --rc geninfo_unexecuted_blocks=1 00:07:31.194 00:07:31.194 ' 00:07:31.194 06:37:43 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:31.194 06:37:43 -- nvmf/common.sh@7 -- # uname -s 00:07:31.194 06:37:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.194 06:37:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.194 06:37:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.194 06:37:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.194 06:37:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.194 06:37:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.194 06:37:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.194 06:37:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.194 06:37:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.194 06:37:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.194 06:37:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:07:31.194 06:37:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:07:31.194 06:37:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.194 06:37:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.194 06:37:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:31.194 06:37:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:31.194 06:37:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.195 06:37:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.195 06:37:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.195 06:37:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.195 06:37:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.195 06:37:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.195 06:37:43 -- paths/export.sh@5 -- # export PATH 00:07:31.195 06:37:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.195 06:37:43 -- nvmf/common.sh@46 -- # : 0 00:07:31.195 06:37:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:31.195 06:37:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:31.195 06:37:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:31.195 06:37:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.195 06:37:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.195 06:37:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:31.195 06:37:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:31.195 06:37:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:31.195 06:37:43 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:31.195 06:37:43 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:31.195 06:37:43 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:31.195 06:37:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:31.195 06:37:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.195 06:37:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:31.195 06:37:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:31.195 06:37:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:31.195 06:37:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.195 06:37:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.195 06:37:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.195 06:37:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:31.195 06:37:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:31.195 06:37:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:31.195 06:37:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:31.195 06:37:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:31.195 06:37:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:31.195 06:37:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.195 06:37:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.195 06:37:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:31.195 06:37:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:31.195 06:37:44 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:31.195 06:37:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:31.195 06:37:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:31.195 06:37:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.195 06:37:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:31.195 06:37:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:31.195 06:37:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:31.195 06:37:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:31.195 06:37:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:31.195 06:37:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:31.195 Cannot find device "nvmf_tgt_br" 00:07:31.195 06:37:44 -- nvmf/common.sh@154 -- # true 00:07:31.195 06:37:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:31.195 Cannot find device "nvmf_tgt_br2" 00:07:31.195 06:37:44 -- nvmf/common.sh@155 -- # true 00:07:31.195 06:37:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:31.195 06:37:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:31.195 Cannot find device "nvmf_tgt_br" 00:07:31.195 06:37:44 -- nvmf/common.sh@157 -- # true 00:07:31.195 06:37:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:31.195 Cannot find device "nvmf_tgt_br2" 00:07:31.195 06:37:44 -- nvmf/common.sh@158 -- # true 00:07:31.195 06:37:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:31.195 06:37:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:31.195 06:37:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:31.195 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:31.195 06:37:44 -- nvmf/common.sh@161 -- # true 00:07:31.195 06:37:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:31.195 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:31.195 06:37:44 -- nvmf/common.sh@162 -- # true 00:07:31.195 06:37:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:31.195 06:37:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:31.195 06:37:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:31.195 06:37:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:31.195 06:37:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:31.195 06:37:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:31.195 06:37:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:31.195 06:37:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:31.195 06:37:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:31.195 06:37:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:31.195 06:37:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:31.195 06:37:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:31.195 06:37:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:31.195 06:37:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:31.195 06:37:44 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:31.195 06:37:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:31.195 06:37:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:31.195 06:37:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:31.195 06:37:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:31.195 06:37:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:31.195 06:37:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:31.195 06:37:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:31.195 06:37:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:31.195 06:37:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:31.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:07:31.195 00:07:31.195 --- 10.0.0.2 ping statistics --- 00:07:31.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.195 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:07:31.195 06:37:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:31.195 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:31.195 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:07:31.195 00:07:31.195 --- 10.0.0.3 ping statistics --- 00:07:31.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.195 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:31.195 06:37:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:31.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:31.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:07:31.195 00:07:31.195 --- 10.0.0.1 ping statistics --- 00:07:31.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.195 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:07:31.195 06:37:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.195 06:37:44 -- nvmf/common.sh@421 -- # return 0 00:07:31.195 06:37:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:31.195 06:37:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.195 06:37:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:31.195 06:37:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:31.195 06:37:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.195 06:37:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:31.195 06:37:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:31.195 06:37:44 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:31.195 06:37:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:31.196 06:37:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.196 06:37:44 -- common/autotest_common.sh@10 -- # set +x 00:07:31.196 ************************************ 00:07:31.196 START TEST nvmf_filesystem_no_in_capsule 00:07:31.196 ************************************ 00:07:31.196 06:37:44 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:07:31.196 06:37:44 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:31.196 06:37:44 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:31.196 06:37:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:31.196 06:37:44 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:07:31.196 06:37:44 -- common/autotest_common.sh@10 -- # set +x 00:07:31.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.196 06:37:44 -- nvmf/common.sh@469 -- # nvmfpid=60671 00:07:31.196 06:37:44 -- nvmf/common.sh@470 -- # waitforlisten 60671 00:07:31.196 06:37:44 -- common/autotest_common.sh@829 -- # '[' -z 60671 ']' 00:07:31.196 06:37:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.196 06:37:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:31.196 06:37:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.196 06:37:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.196 06:37:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.196 06:37:44 -- common/autotest_common.sh@10 -- # set +x 00:07:31.196 [2024-12-14 06:37:44.436826] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:31.196 [2024-12-14 06:37:44.436925] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.196 [2024-12-14 06:37:44.578467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.196 [2024-12-14 06:37:44.705415] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:31.196 [2024-12-14 06:37:44.705893] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.196 [2024-12-14 06:37:44.706059] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.196 [2024-12-14 06:37:44.706222] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
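[Editor's note] The xtrace above is nvmf/common.sh tearing down any stale fixture and rebuilding the veth/namespace topology the TCP tests run on, then launching nvmf_tgt inside that namespace. Below is a condensed sketch of the same bring-up; the interface names, 10.0.0.0/24 addresses, and nvmf_tgt invocation are taken from the trace, the second veth pair (nvmf_tgt_if2 / 10.0.0.3) is created the same way and omitted for brevity, and the socket-polling loop is only a simplified stand-in for the harness's waitforlisten helper.

  # initiator veth on the host <-> bridge <-> target veth inside a network namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # target side
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                                         # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                          # target -> initiator
  modprobe nvme-tcp
  # start the target inside the namespace, then wait for its RPC socket (simplified waitforlisten)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done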
00:07:31.196 [2024-12-14 06:37:44.706579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.196 [2024-12-14 06:37:44.706712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.196 [2024-12-14 06:37:44.706784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.196 [2024-12-14 06:37:44.706786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.762 06:37:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.762 06:37:45 -- common/autotest_common.sh@862 -- # return 0 00:07:31.762 06:37:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:31.762 06:37:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:31.762 06:37:45 -- common/autotest_common.sh@10 -- # set +x 00:07:31.762 06:37:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.762 06:37:45 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:31.762 06:37:45 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:31.762 06:37:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.762 06:37:45 -- common/autotest_common.sh@10 -- # set +x 00:07:31.762 [2024-12-14 06:37:45.523095] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.762 06:37:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.762 06:37:45 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:31.762 06:37:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.762 06:37:45 -- common/autotest_common.sh@10 -- # set +x 00:07:32.020 Malloc1 00:07:32.020 06:37:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.020 06:37:45 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:32.020 06:37:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.020 06:37:45 -- common/autotest_common.sh@10 -- # set +x 00:07:32.020 06:37:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.020 06:37:45 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:32.020 06:37:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.020 06:37:45 -- common/autotest_common.sh@10 -- # set +x 00:07:32.020 06:37:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.020 06:37:45 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.020 06:37:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.020 06:37:45 -- common/autotest_common.sh@10 -- # set +x 00:07:32.020 [2024-12-14 06:37:45.778379] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.020 06:37:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.020 06:37:45 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:32.020 06:37:45 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:07:32.020 06:37:45 -- common/autotest_common.sh@1368 -- # local bdev_info 00:07:32.020 06:37:45 -- common/autotest_common.sh@1369 -- # local bs 00:07:32.020 06:37:45 -- common/autotest_common.sh@1370 -- # local nb 00:07:32.020 06:37:45 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:32.020 06:37:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.020 06:37:45 -- common/autotest_common.sh@10 -- # set +x 00:07:32.020 
06:37:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.020 06:37:45 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:07:32.020 { 00:07:32.020 "aliases": [ 00:07:32.020 "bd383541-b1ed-4f76-bc2f-ffb4c988c9c5" 00:07:32.020 ], 00:07:32.020 "assigned_rate_limits": { 00:07:32.021 "r_mbytes_per_sec": 0, 00:07:32.021 "rw_ios_per_sec": 0, 00:07:32.021 "rw_mbytes_per_sec": 0, 00:07:32.021 "w_mbytes_per_sec": 0 00:07:32.021 }, 00:07:32.021 "block_size": 512, 00:07:32.021 "claim_type": "exclusive_write", 00:07:32.021 "claimed": true, 00:07:32.021 "driver_specific": {}, 00:07:32.021 "memory_domains": [ 00:07:32.021 { 00:07:32.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.021 "dma_device_type": 2 00:07:32.021 } 00:07:32.021 ], 00:07:32.021 "name": "Malloc1", 00:07:32.021 "num_blocks": 1048576, 00:07:32.021 "product_name": "Malloc disk", 00:07:32.021 "supported_io_types": { 00:07:32.021 "abort": true, 00:07:32.021 "compare": false, 00:07:32.021 "compare_and_write": false, 00:07:32.021 "flush": true, 00:07:32.021 "nvme_admin": false, 00:07:32.021 "nvme_io": false, 00:07:32.021 "read": true, 00:07:32.021 "reset": true, 00:07:32.021 "unmap": true, 00:07:32.021 "write": true, 00:07:32.021 "write_zeroes": true 00:07:32.021 }, 00:07:32.021 "uuid": "bd383541-b1ed-4f76-bc2f-ffb4c988c9c5", 00:07:32.021 "zoned": false 00:07:32.021 } 00:07:32.021 ]' 00:07:32.021 06:37:45 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:07:32.021 06:37:45 -- common/autotest_common.sh@1372 -- # bs=512 00:07:32.021 06:37:45 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:07:32.021 06:37:45 -- common/autotest_common.sh@1373 -- # nb=1048576 00:07:32.021 06:37:45 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:07:32.021 06:37:45 -- common/autotest_common.sh@1377 -- # echo 512 00:07:32.021 06:37:45 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:32.021 06:37:45 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:32.279 06:37:46 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:32.279 06:37:46 -- common/autotest_common.sh@1187 -- # local i=0 00:07:32.279 06:37:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:07:32.279 06:37:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:07:32.279 06:37:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:07:34.179 06:37:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:07:34.179 06:37:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:07:34.179 06:37:48 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:07:34.179 06:37:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:07:34.179 06:37:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:07:34.179 06:37:48 -- common/autotest_common.sh@1197 -- # return 0 00:07:34.179 06:37:48 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:34.179 06:37:48 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:34.179 06:37:48 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:34.179 06:37:48 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:34.179 06:37:48 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:34.179 06:37:48 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:34.179 06:37:48 -- 
setup/common.sh@80 -- # echo 536870912 00:07:34.179 06:37:48 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:34.179 06:37:48 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:34.179 06:37:48 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:34.179 06:37:48 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:34.437 06:37:48 -- target/filesystem.sh@69 -- # partprobe 00:07:34.437 06:37:48 -- target/filesystem.sh@70 -- # sleep 1 00:07:35.372 06:37:49 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:35.372 06:37:49 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:35.372 06:37:49 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:35.372 06:37:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.372 06:37:49 -- common/autotest_common.sh@10 -- # set +x 00:07:35.372 ************************************ 00:07:35.372 START TEST filesystem_ext4 00:07:35.372 ************************************ 00:07:35.372 06:37:49 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:35.372 06:37:49 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:35.372 06:37:49 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:35.372 06:37:49 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:35.372 06:37:49 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:35.372 06:37:49 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:35.372 06:37:49 -- common/autotest_common.sh@914 -- # local i=0 00:07:35.372 06:37:49 -- common/autotest_common.sh@915 -- # local force 00:07:35.372 06:37:49 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:35.372 06:37:49 -- common/autotest_common.sh@918 -- # force=-F 00:07:35.372 06:37:49 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:35.372 mke2fs 1.47.0 (5-Feb-2023) 00:07:35.630 Discarding device blocks: 0/522240 done 00:07:35.630 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:35.630 Filesystem UUID: 78d54807-a2c8-40ea-9856-8c35d418431c 00:07:35.630 Superblock backups stored on blocks: 00:07:35.630 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:35.630 00:07:35.630 Allocating group tables: 0/64 done 00:07:35.630 Writing inode tables: 0/64 done 00:07:35.630 Creating journal (8192 blocks): done 00:07:35.630 Writing superblocks and filesystem accounting information: 0/64 done 00:07:35.630 00:07:35.630 06:37:49 -- common/autotest_common.sh@931 -- # return 0 00:07:35.630 06:37:49 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:40.943 06:37:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:40.943 06:37:54 -- target/filesystem.sh@25 -- # sync 00:07:41.202 06:37:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:41.202 06:37:54 -- target/filesystem.sh@27 -- # sync 00:07:41.202 06:37:54 -- target/filesystem.sh@29 -- # i=0 00:07:41.202 06:37:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:41.202 06:37:54 -- target/filesystem.sh@37 -- # kill -0 60671 00:07:41.202 06:37:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:41.202 06:37:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:41.202 06:37:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:41.202 06:37:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:41.202 00:07:41.202 real 0m5.679s 00:07:41.202 user 0m0.026s 00:07:41.202 sys 0m0.064s 00:07:41.202 
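[Editor's note] Once the exported namespace shows up on the initiator as nvme0n1 and its size matches the malloc bdev, filesystem.sh partitions it and runs the same smoke test once per filesystem. A minimal restatement of that loop for ext4 follows (the btrfs and xfs runs below differ only in the mkfs invocation); the device, mount point, and pid variable names are the ones visible in the trace.

  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe && sleep 1
  mkfs.ext4 -F /dev/nvme0n1p1                    # btrfs/xfs use 'mkfs.btrfs -f' / 'mkfs.xfs -f'
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                             # target process must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1          # controller still visible on the initiator
  lsblk -l -o NAME | grep -q -w nvme0n1p1        # partition still visible on the initiator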
06:37:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:41.202 06:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:41.202 ************************************ 00:07:41.202 END TEST filesystem_ext4 00:07:41.202 ************************************ 00:07:41.202 06:37:55 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:41.202 06:37:55 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:41.202 06:37:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.202 06:37:55 -- common/autotest_common.sh@10 -- # set +x 00:07:41.202 ************************************ 00:07:41.202 START TEST filesystem_btrfs 00:07:41.202 ************************************ 00:07:41.202 06:37:55 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:41.202 06:37:55 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:41.202 06:37:55 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.202 06:37:55 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:41.202 06:37:55 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:41.202 06:37:55 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:41.202 06:37:55 -- common/autotest_common.sh@914 -- # local i=0 00:07:41.202 06:37:55 -- common/autotest_common.sh@915 -- # local force 00:07:41.202 06:37:55 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:41.202 06:37:55 -- common/autotest_common.sh@920 -- # force=-f 00:07:41.202 06:37:55 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:41.460 btrfs-progs v6.8.1 00:07:41.460 See https://btrfs.readthedocs.io for more information. 00:07:41.460 00:07:41.460 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:41.460 NOTE: several default settings have changed in version 5.15, please make sure 00:07:41.460 this does not affect your deployments: 00:07:41.460 - DUP for metadata (-m dup) 00:07:41.460 - enabled no-holes (-O no-holes) 00:07:41.460 - enabled free-space-tree (-R free-space-tree) 00:07:41.460 00:07:41.460 Label: (null) 00:07:41.460 UUID: 4cd6bfee-05b2-49fb-b018-d0f0f72ad0a5 00:07:41.460 Node size: 16384 00:07:41.460 Sector size: 4096 (CPU page size: 4096) 00:07:41.460 Filesystem size: 510.00MiB 00:07:41.460 Block group profiles: 00:07:41.460 Data: single 8.00MiB 00:07:41.460 Metadata: DUP 32.00MiB 00:07:41.460 System: DUP 8.00MiB 00:07:41.460 SSD detected: yes 00:07:41.460 Zoned device: no 00:07:41.460 Features: extref, skinny-metadata, no-holes, free-space-tree 00:07:41.460 Checksum: crc32c 00:07:41.460 Number of devices: 1 00:07:41.460 Devices: 00:07:41.460 ID SIZE PATH 00:07:41.460 1 510.00MiB /dev/nvme0n1p1 00:07:41.460 00:07:41.460 06:37:55 -- common/autotest_common.sh@931 -- # return 0 00:07:41.460 06:37:55 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:41.460 06:37:55 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:41.460 06:37:55 -- target/filesystem.sh@25 -- # sync 00:07:41.460 06:37:55 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:41.460 06:37:55 -- target/filesystem.sh@27 -- # sync 00:07:41.460 06:37:55 -- target/filesystem.sh@29 -- # i=0 00:07:41.460 06:37:55 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:41.460 06:37:55 -- target/filesystem.sh@37 -- # kill -0 60671 00:07:41.460 06:37:55 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:41.460 06:37:55 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:41.460 06:37:55 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:41.460 06:37:55 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:41.460 00:07:41.460 real 0m0.277s 00:07:41.460 user 0m0.016s 00:07:41.460 sys 0m0.069s 00:07:41.460 06:37:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:41.460 06:37:55 -- common/autotest_common.sh@10 -- # set +x 00:07:41.460 ************************************ 00:07:41.460 END TEST filesystem_btrfs 00:07:41.460 ************************************ 00:07:41.460 06:37:55 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:41.460 06:37:55 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:41.460 06:37:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.460 06:37:55 -- common/autotest_common.sh@10 -- # set +x 00:07:41.460 ************************************ 00:07:41.460 START TEST filesystem_xfs 00:07:41.460 ************************************ 00:07:41.460 06:37:55 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:07:41.460 06:37:55 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:41.460 06:37:55 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.460 06:37:55 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:41.461 06:37:55 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:41.461 06:37:55 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:41.461 06:37:55 -- common/autotest_common.sh@914 -- # local i=0 00:07:41.461 06:37:55 -- common/autotest_common.sh@915 -- # local force 00:07:41.461 06:37:55 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:41.461 06:37:55 -- common/autotest_common.sh@920 -- # force=-f 00:07:41.461 06:37:55 -- common/autotest_common.sh@923 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:07:41.719 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:41.719 = sectsz=512 attr=2, projid32bit=1 00:07:41.719 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:41.719 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:41.719 data = bsize=4096 blocks=130560, imaxpct=25 00:07:41.719 = sunit=0 swidth=0 blks 00:07:41.719 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:41.719 log =internal log bsize=4096 blocks=16384, version=2 00:07:41.719 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:41.719 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:42.285 Discarding blocks...Done. 00:07:42.285 06:37:56 -- common/autotest_common.sh@931 -- # return 0 00:07:42.285 06:37:56 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:44.818 06:37:58 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:44.818 06:37:58 -- target/filesystem.sh@25 -- # sync 00:07:44.818 06:37:58 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:44.818 06:37:58 -- target/filesystem.sh@27 -- # sync 00:07:44.818 06:37:58 -- target/filesystem.sh@29 -- # i=0 00:07:44.818 06:37:58 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:44.818 06:37:58 -- target/filesystem.sh@37 -- # kill -0 60671 00:07:44.818 06:37:58 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:44.818 06:37:58 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:44.818 06:37:58 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:44.818 06:37:58 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:44.818 00:07:44.818 real 0m3.173s 00:07:44.818 user 0m0.024s 00:07:44.818 sys 0m0.059s 00:07:44.818 06:37:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:44.818 06:37:58 -- common/autotest_common.sh@10 -- # set +x 00:07:44.818 ************************************ 00:07:44.818 END TEST filesystem_xfs 00:07:44.818 ************************************ 00:07:44.818 06:37:58 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:44.818 06:37:58 -- target/filesystem.sh@93 -- # sync 00:07:44.818 06:37:58 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:44.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:44.818 06:37:58 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:44.818 06:37:58 -- common/autotest_common.sh@1208 -- # local i=0 00:07:44.818 06:37:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:07:44.818 06:37:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:44.818 06:37:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:07:44.818 06:37:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:44.818 06:37:58 -- common/autotest_common.sh@1220 -- # return 0 00:07:44.818 06:37:58 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:44.818 06:37:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.818 06:37:58 -- common/autotest_common.sh@10 -- # set +x 00:07:44.818 06:37:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.818 06:37:58 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:44.818 06:37:58 -- target/filesystem.sh@101 -- # killprocess 60671 00:07:44.818 06:37:58 -- common/autotest_common.sh@936 -- # '[' -z 60671 ']' 00:07:44.818 06:37:58 -- common/autotest_common.sh@940 -- # kill -0 60671 00:07:44.818 06:37:58 -- common/autotest_common.sh@941 -- # uname 00:07:44.818 06:37:58 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:44.818 06:37:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60671 00:07:44.818 06:37:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:44.818 06:37:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:44.818 killing process with pid 60671 00:07:44.818 06:37:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60671' 00:07:44.818 06:37:58 -- common/autotest_common.sh@955 -- # kill 60671 00:07:44.818 06:37:58 -- common/autotest_common.sh@960 -- # wait 60671 00:07:45.385 06:37:59 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:45.385 00:07:45.385 real 0m14.938s 00:07:45.385 user 0m56.867s 00:07:45.385 sys 0m2.136s 00:07:45.385 06:37:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.385 06:37:59 -- common/autotest_common.sh@10 -- # set +x 00:07:45.385 ************************************ 00:07:45.385 END TEST nvmf_filesystem_no_in_capsule 00:07:45.385 ************************************ 00:07:45.385 06:37:59 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:45.385 06:37:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:45.385 06:37:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.385 06:37:59 -- common/autotest_common.sh@10 -- # set +x 00:07:45.385 ************************************ 00:07:45.385 START TEST nvmf_filesystem_in_capsule 00:07:45.386 ************************************ 00:07:45.386 06:37:59 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:07:45.386 06:37:59 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:45.386 06:37:59 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:45.386 06:37:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:45.386 06:37:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:45.386 06:37:59 -- common/autotest_common.sh@10 -- # set +x 00:07:45.386 06:37:59 -- nvmf/common.sh@469 -- # nvmfpid=61053 00:07:45.386 06:37:59 -- nvmf/common.sh@470 -- # waitforlisten 61053 00:07:45.386 06:37:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:45.386 06:37:59 -- common/autotest_common.sh@829 -- # '[' -z 61053 ']' 00:07:45.386 06:37:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.386 06:37:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:45.386 06:37:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.386 06:37:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:45.386 06:37:59 -- common/autotest_common.sh@10 -- # set +x 00:07:45.645 [2024-12-14 06:37:59.417672] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
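[Editor's note] The no-in-capsule half ends by disconnecting, deleting the subsystem, and killing target pid 60671; the in-capsule half then repeats the whole sequence against a freshly started target (pid 61053). The only functional difference is the in-capsule data size passed to nvmf_create_transport. A sketch of the repeated bring-up, using only the RPCs visible in the trace (rpc_cmd is the harness's wrapper around the target's JSON-RPC socket):

  # the first half used: nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096     # allow up to 4 KiB of in-capsule data
  rpc_cmd bdev_malloc_create 512 512 -b Malloc1               # 512 MiB malloc bdev, 512 B blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420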
00:07:45.645 [2024-12-14 06:37:59.417754] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.645 [2024-12-14 06:37:59.552043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.902 [2024-12-14 06:37:59.644246] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:45.902 [2024-12-14 06:37:59.644438] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.902 [2024-12-14 06:37:59.644451] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.902 [2024-12-14 06:37:59.644459] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:45.902 [2024-12-14 06:37:59.644599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.902 [2024-12-14 06:37:59.645030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.902 [2024-12-14 06:37:59.645483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.902 [2024-12-14 06:37:59.645542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.908 06:38:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.908 06:38:00 -- common/autotest_common.sh@862 -- # return 0 00:07:46.908 06:38:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:46.908 06:38:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:46.908 06:38:00 -- common/autotest_common.sh@10 -- # set +x 00:07:46.908 06:38:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.908 06:38:00 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:46.908 06:38:00 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:46.908 06:38:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.908 06:38:00 -- common/autotest_common.sh@10 -- # set +x 00:07:46.908 [2024-12-14 06:38:00.518590] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.908 06:38:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.908 06:38:00 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:46.908 06:38:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.908 06:38:00 -- common/autotest_common.sh@10 -- # set +x 00:07:46.908 Malloc1 00:07:46.908 06:38:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.908 06:38:00 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:46.908 06:38:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.908 06:38:00 -- common/autotest_common.sh@10 -- # set +x 00:07:46.908 06:38:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.908 06:38:00 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:46.908 06:38:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.909 06:38:00 -- common/autotest_common.sh@10 -- # set +x 00:07:46.909 06:38:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.909 06:38:00 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:46.909 06:38:00 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.909 06:38:00 -- common/autotest_common.sh@10 -- # set +x 00:07:46.909 [2024-12-14 06:38:00.760549] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.909 06:38:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.909 06:38:00 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:46.909 06:38:00 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:07:46.909 06:38:00 -- common/autotest_common.sh@1368 -- # local bdev_info 00:07:46.909 06:38:00 -- common/autotest_common.sh@1369 -- # local bs 00:07:46.909 06:38:00 -- common/autotest_common.sh@1370 -- # local nb 00:07:46.909 06:38:00 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:46.909 06:38:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.909 06:38:00 -- common/autotest_common.sh@10 -- # set +x 00:07:46.909 06:38:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.909 06:38:00 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:07:46.909 { 00:07:46.909 "aliases": [ 00:07:46.909 "3f658329-2672-4b44-ab26-e977db2b576c" 00:07:46.909 ], 00:07:46.909 "assigned_rate_limits": { 00:07:46.909 "r_mbytes_per_sec": 0, 00:07:46.909 "rw_ios_per_sec": 0, 00:07:46.909 "rw_mbytes_per_sec": 0, 00:07:46.909 "w_mbytes_per_sec": 0 00:07:46.909 }, 00:07:46.909 "block_size": 512, 00:07:46.909 "claim_type": "exclusive_write", 00:07:46.909 "claimed": true, 00:07:46.909 "driver_specific": {}, 00:07:46.909 "memory_domains": [ 00:07:46.909 { 00:07:46.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.909 "dma_device_type": 2 00:07:46.909 } 00:07:46.909 ], 00:07:46.909 "name": "Malloc1", 00:07:46.909 "num_blocks": 1048576, 00:07:46.909 "product_name": "Malloc disk", 00:07:46.909 "supported_io_types": { 00:07:46.909 "abort": true, 00:07:46.909 "compare": false, 00:07:46.909 "compare_and_write": false, 00:07:46.909 "flush": true, 00:07:46.909 "nvme_admin": false, 00:07:46.909 "nvme_io": false, 00:07:46.909 "read": true, 00:07:46.909 "reset": true, 00:07:46.909 "unmap": true, 00:07:46.909 "write": true, 00:07:46.909 "write_zeroes": true 00:07:46.909 }, 00:07:46.909 "uuid": "3f658329-2672-4b44-ab26-e977db2b576c", 00:07:46.909 "zoned": false 00:07:46.909 } 00:07:46.909 ]' 00:07:46.909 06:38:00 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:07:46.909 06:38:00 -- common/autotest_common.sh@1372 -- # bs=512 00:07:46.909 06:38:00 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:07:46.909 06:38:00 -- common/autotest_common.sh@1373 -- # nb=1048576 00:07:46.909 06:38:00 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:07:46.909 06:38:00 -- common/autotest_common.sh@1377 -- # echo 512 00:07:46.909 06:38:00 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:46.909 06:38:00 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:47.168 06:38:01 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:47.168 06:38:01 -- common/autotest_common.sh@1187 -- # local i=0 00:07:47.168 06:38:01 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:07:47.168 06:38:01 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:07:47.168 06:38:01 -- common/autotest_common.sh@1194 -- # sleep 2 00:07:49.699 06:38:03 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:07:49.699 06:38:03 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:07:49.699 06:38:03 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:07:49.699 06:38:03 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:07:49.699 06:38:03 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:07:49.699 06:38:03 -- common/autotest_common.sh@1197 -- # return 0 00:07:49.699 06:38:03 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:49.699 06:38:03 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:49.699 06:38:03 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:49.699 06:38:03 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:49.699 06:38:03 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:49.699 06:38:03 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:49.699 06:38:03 -- setup/common.sh@80 -- # echo 536870912 00:07:49.699 06:38:03 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:49.699 06:38:03 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:49.699 06:38:03 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:49.699 06:38:03 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:49.699 06:38:03 -- target/filesystem.sh@69 -- # partprobe 00:07:49.699 06:38:03 -- target/filesystem.sh@70 -- # sleep 1 00:07:50.266 06:38:04 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:50.266 06:38:04 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:50.266 06:38:04 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:50.266 06:38:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.266 06:38:04 -- common/autotest_common.sh@10 -- # set +x 00:07:50.266 ************************************ 00:07:50.266 START TEST filesystem_in_capsule_ext4 00:07:50.266 ************************************ 00:07:50.266 06:38:04 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:50.266 06:38:04 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:50.266 06:38:04 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:50.266 06:38:04 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:50.266 06:38:04 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:50.266 06:38:04 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:50.266 06:38:04 -- common/autotest_common.sh@914 -- # local i=0 00:07:50.266 06:38:04 -- common/autotest_common.sh@915 -- # local force 00:07:50.266 06:38:04 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:50.266 06:38:04 -- common/autotest_common.sh@918 -- # force=-F 00:07:50.266 06:38:04 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:50.266 mke2fs 1.47.0 (5-Feb-2023) 00:07:50.525 Discarding device blocks: 0/522240 done 00:07:50.525 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:50.525 Filesystem UUID: c935db63-9e7e-4ed8-af17-1da5ff3772db 00:07:50.525 Superblock backups stored on blocks: 00:07:50.525 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:50.525 00:07:50.525 Allocating group tables: 0/64 done 00:07:50.525 Writing inode tables: 0/64 done 00:07:50.525 Creating journal (8192 blocks): done 00:07:50.525 Writing superblocks and filesystem accounting information: 0/64 done 00:07:50.525 00:07:50.525 06:38:04 
-- common/autotest_common.sh@931 -- # return 0 00:07:50.525 06:38:04 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:55.792 06:38:09 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:56.051 06:38:09 -- target/filesystem.sh@25 -- # sync 00:07:56.051 06:38:09 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:56.051 06:38:09 -- target/filesystem.sh@27 -- # sync 00:07:56.051 06:38:09 -- target/filesystem.sh@29 -- # i=0 00:07:56.051 06:38:09 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:56.051 06:38:09 -- target/filesystem.sh@37 -- # kill -0 61053 00:07:56.051 06:38:09 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:56.051 06:38:09 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:56.051 06:38:09 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:56.051 06:38:09 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:56.051 00:07:56.051 real 0m5.690s 00:07:56.051 user 0m0.022s 00:07:56.051 sys 0m0.066s 00:07:56.051 06:38:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:56.051 06:38:09 -- common/autotest_common.sh@10 -- # set +x 00:07:56.051 ************************************ 00:07:56.051 END TEST filesystem_in_capsule_ext4 00:07:56.051 ************************************ 00:07:56.051 06:38:09 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:56.051 06:38:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:56.051 06:38:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.051 06:38:09 -- common/autotest_common.sh@10 -- # set +x 00:07:56.051 ************************************ 00:07:56.051 START TEST filesystem_in_capsule_btrfs 00:07:56.051 ************************************ 00:07:56.051 06:38:09 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:56.051 06:38:09 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:56.051 06:38:09 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:56.051 06:38:09 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:56.051 06:38:09 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:56.051 06:38:09 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:56.051 06:38:09 -- common/autotest_common.sh@914 -- # local i=0 00:07:56.051 06:38:09 -- common/autotest_common.sh@915 -- # local force 00:07:56.051 06:38:09 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:56.051 06:38:09 -- common/autotest_common.sh@920 -- # force=-f 00:07:56.051 06:38:09 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:56.309 btrfs-progs v6.8.1 00:07:56.309 See https://btrfs.readthedocs.io for more information. 00:07:56.309 00:07:56.309 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:56.309 NOTE: several default settings have changed in version 5.15, please make sure 00:07:56.309 this does not affect your deployments: 00:07:56.309 - DUP for metadata (-m dup) 00:07:56.309 - enabled no-holes (-O no-holes) 00:07:56.309 - enabled free-space-tree (-R free-space-tree) 00:07:56.309 00:07:56.309 Label: (null) 00:07:56.309 UUID: d82dfbc5-be6b-470a-88ff-5fb0e65fdf09 00:07:56.309 Node size: 16384 00:07:56.309 Sector size: 4096 (CPU page size: 4096) 00:07:56.309 Filesystem size: 510.00MiB 00:07:56.309 Block group profiles: 00:07:56.309 Data: single 8.00MiB 00:07:56.309 Metadata: DUP 32.00MiB 00:07:56.309 System: DUP 8.00MiB 00:07:56.309 SSD detected: yes 00:07:56.309 Zoned device: no 00:07:56.309 Features: extref, skinny-metadata, no-holes, free-space-tree 00:07:56.309 Checksum: crc32c 00:07:56.309 Number of devices: 1 00:07:56.309 Devices: 00:07:56.309 ID SIZE PATH 00:07:56.309 1 510.00MiB /dev/nvme0n1p1 00:07:56.309 00:07:56.309 06:38:10 -- common/autotest_common.sh@931 -- # return 0 00:07:56.309 06:38:10 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:56.309 06:38:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:56.309 06:38:10 -- target/filesystem.sh@25 -- # sync 00:07:56.309 06:38:10 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:56.309 06:38:10 -- target/filesystem.sh@27 -- # sync 00:07:56.309 06:38:10 -- target/filesystem.sh@29 -- # i=0 00:07:56.309 06:38:10 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:56.309 06:38:10 -- target/filesystem.sh@37 -- # kill -0 61053 00:07:56.309 06:38:10 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:56.309 06:38:10 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:56.309 06:38:10 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:56.309 06:38:10 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:56.309 00:07:56.309 real 0m0.306s 00:07:56.309 user 0m0.020s 00:07:56.309 sys 0m0.059s 00:07:56.309 06:38:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:56.309 06:38:10 -- common/autotest_common.sh@10 -- # set +x 00:07:56.309 ************************************ 00:07:56.309 END TEST filesystem_in_capsule_btrfs 00:07:56.309 ************************************ 00:07:56.309 06:38:10 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:56.309 06:38:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:56.309 06:38:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.309 06:38:10 -- common/autotest_common.sh@10 -- # set +x 00:07:56.309 ************************************ 00:07:56.309 START TEST filesystem_in_capsule_xfs 00:07:56.309 ************************************ 00:07:56.309 06:38:10 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:07:56.309 06:38:10 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:56.309 06:38:10 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:56.309 06:38:10 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:56.309 06:38:10 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:56.309 06:38:10 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:56.309 06:38:10 -- common/autotest_common.sh@914 -- # local i=0 00:07:56.309 06:38:10 -- common/autotest_common.sh@915 -- # local force 00:07:56.309 06:38:10 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:56.309 06:38:10 -- common/autotest_common.sh@920 -- # force=-f 00:07:56.309 06:38:10 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:56.568 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:56.568 = sectsz=512 attr=2, projid32bit=1 00:07:56.568 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:56.568 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:56.568 data = bsize=4096 blocks=130560, imaxpct=25 00:07:56.568 = sunit=0 swidth=0 blks 00:07:56.568 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:56.568 log =internal log bsize=4096 blocks=16384, version=2 00:07:56.568 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:56.568 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:57.503 Discarding blocks...Done. 00:07:57.503 06:38:11 -- common/autotest_common.sh@931 -- # return 0 00:07:57.503 06:38:11 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:59.405 06:38:12 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:59.405 06:38:12 -- target/filesystem.sh@25 -- # sync 00:07:59.405 06:38:12 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:59.405 06:38:12 -- target/filesystem.sh@27 -- # sync 00:07:59.405 06:38:12 -- target/filesystem.sh@29 -- # i=0 00:07:59.405 06:38:12 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:59.405 06:38:12 -- target/filesystem.sh@37 -- # kill -0 61053 00:07:59.405 06:38:12 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:59.405 06:38:12 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:59.405 06:38:13 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:59.405 06:38:13 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:59.405 00:07:59.405 real 0m2.719s 00:07:59.405 user 0m0.028s 00:07:59.405 sys 0m0.053s 00:07:59.405 06:38:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:59.405 06:38:13 -- common/autotest_common.sh@10 -- # set +x 00:07:59.405 ************************************ 00:07:59.405 END TEST filesystem_in_capsule_xfs 00:07:59.405 ************************************ 00:07:59.405 06:38:13 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:59.405 06:38:13 -- target/filesystem.sh@93 -- # sync 00:07:59.405 06:38:13 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:59.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.405 06:38:13 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:59.405 06:38:13 -- common/autotest_common.sh@1208 -- # local i=0 00:07:59.405 06:38:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:07:59.405 06:38:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.405 06:38:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:07:59.405 06:38:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.405 06:38:13 -- common/autotest_common.sh@1220 -- # return 0 00:07:59.405 06:38:13 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:59.405 06:38:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.405 06:38:13 -- common/autotest_common.sh@10 -- # set +x 00:07:59.405 06:38:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.405 06:38:13 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:59.405 06:38:13 -- target/filesystem.sh@101 -- # killprocess 61053 00:07:59.405 06:38:13 -- common/autotest_common.sh@936 -- # '[' -z 61053 ']' 00:07:59.405 06:38:13 -- common/autotest_common.sh@940 -- # kill -0 61053 00:07:59.405 06:38:13 -- 
common/autotest_common.sh@941 -- # uname 00:07:59.405 06:38:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:59.405 06:38:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61053 00:07:59.405 06:38:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:59.405 06:38:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:59.405 killing process with pid 61053 00:07:59.405 06:38:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61053' 00:07:59.405 06:38:13 -- common/autotest_common.sh@955 -- # kill 61053 00:07:59.405 06:38:13 -- common/autotest_common.sh@960 -- # wait 61053 00:07:59.972 06:38:13 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:59.972 00:07:59.972 real 0m14.459s 00:07:59.972 user 0m55.305s 00:07:59.972 sys 0m2.003s 00:07:59.972 06:38:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:59.972 ************************************ 00:07:59.972 END TEST nvmf_filesystem_in_capsule 00:07:59.972 ************************************ 00:07:59.972 06:38:13 -- common/autotest_common.sh@10 -- # set +x 00:07:59.972 06:38:13 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:59.972 06:38:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:59.972 06:38:13 -- nvmf/common.sh@116 -- # sync 00:07:59.972 06:38:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:59.972 06:38:13 -- nvmf/common.sh@119 -- # set +e 00:07:59.972 06:38:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:59.972 06:38:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:59.972 rmmod nvme_tcp 00:07:59.972 rmmod nvme_fabrics 00:07:59.972 rmmod nvme_keyring 00:07:59.972 06:38:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:00.231 06:38:13 -- nvmf/common.sh@123 -- # set -e 00:08:00.231 06:38:13 -- nvmf/common.sh@124 -- # return 0 00:08:00.231 06:38:13 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:00.231 06:38:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:00.231 06:38:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:00.231 06:38:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:00.231 06:38:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:00.231 06:38:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:00.231 06:38:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.231 06:38:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.231 06:38:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.231 06:38:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:00.231 00:08:00.231 real 0m30.437s 00:08:00.231 user 1m52.572s 00:08:00.231 sys 0m4.582s 00:08:00.231 06:38:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:00.231 06:38:14 -- common/autotest_common.sh@10 -- # set +x 00:08:00.231 ************************************ 00:08:00.231 END TEST nvmf_filesystem 00:08:00.231 ************************************ 00:08:00.231 06:38:14 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:00.231 06:38:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:00.231 06:38:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.231 06:38:14 -- common/autotest_common.sh@10 -- # set +x 00:08:00.231 ************************************ 00:08:00.231 START TEST nvmf_discovery 00:08:00.231 ************************************ 00:08:00.231 06:38:14 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:00.231 * Looking for test storage... 00:08:00.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:00.231 06:38:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:00.231 06:38:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:00.231 06:38:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:00.231 06:38:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:00.231 06:38:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:00.231 06:38:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:00.231 06:38:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:00.231 06:38:14 -- scripts/common.sh@335 -- # IFS=.-: 00:08:00.231 06:38:14 -- scripts/common.sh@335 -- # read -ra ver1 00:08:00.231 06:38:14 -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.231 06:38:14 -- scripts/common.sh@336 -- # read -ra ver2 00:08:00.231 06:38:14 -- scripts/common.sh@337 -- # local 'op=<' 00:08:00.231 06:38:14 -- scripts/common.sh@339 -- # ver1_l=2 00:08:00.231 06:38:14 -- scripts/common.sh@340 -- # ver2_l=1 00:08:00.231 06:38:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:00.231 06:38:14 -- scripts/common.sh@343 -- # case "$op" in 00:08:00.231 06:38:14 -- scripts/common.sh@344 -- # : 1 00:08:00.231 06:38:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:00.231 06:38:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:00.231 06:38:14 -- scripts/common.sh@364 -- # decimal 1 00:08:00.490 06:38:14 -- scripts/common.sh@352 -- # local d=1 00:08:00.490 06:38:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.490 06:38:14 -- scripts/common.sh@354 -- # echo 1 00:08:00.490 06:38:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:00.490 06:38:14 -- scripts/common.sh@365 -- # decimal 2 00:08:00.490 06:38:14 -- scripts/common.sh@352 -- # local d=2 00:08:00.490 06:38:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.490 06:38:14 -- scripts/common.sh@354 -- # echo 2 00:08:00.490 06:38:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:00.490 06:38:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:00.490 06:38:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:00.490 06:38:14 -- scripts/common.sh@367 -- # return 0 00:08:00.490 06:38:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.490 06:38:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:00.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.490 --rc genhtml_branch_coverage=1 00:08:00.490 --rc genhtml_function_coverage=1 00:08:00.490 --rc genhtml_legend=1 00:08:00.490 --rc geninfo_all_blocks=1 00:08:00.490 --rc geninfo_unexecuted_blocks=1 00:08:00.490 00:08:00.490 ' 00:08:00.490 06:38:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:00.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.490 --rc genhtml_branch_coverage=1 00:08:00.490 --rc genhtml_function_coverage=1 00:08:00.490 --rc genhtml_legend=1 00:08:00.490 --rc geninfo_all_blocks=1 00:08:00.490 --rc geninfo_unexecuted_blocks=1 00:08:00.490 00:08:00.490 ' 00:08:00.490 06:38:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:00.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.490 --rc genhtml_branch_coverage=1 00:08:00.490 --rc genhtml_function_coverage=1 00:08:00.490 --rc genhtml_legend=1 00:08:00.490 
--rc geninfo_all_blocks=1 00:08:00.490 --rc geninfo_unexecuted_blocks=1 00:08:00.490 00:08:00.490 ' 00:08:00.490 06:38:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:00.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.490 --rc genhtml_branch_coverage=1 00:08:00.490 --rc genhtml_function_coverage=1 00:08:00.490 --rc genhtml_legend=1 00:08:00.490 --rc geninfo_all_blocks=1 00:08:00.490 --rc geninfo_unexecuted_blocks=1 00:08:00.490 00:08:00.490 ' 00:08:00.490 06:38:14 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:00.490 06:38:14 -- nvmf/common.sh@7 -- # uname -s 00:08:00.490 06:38:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.490 06:38:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.490 06:38:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.490 06:38:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.490 06:38:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.490 06:38:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.490 06:38:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.490 06:38:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.490 06:38:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.490 06:38:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.490 06:38:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:08:00.490 06:38:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:08:00.491 06:38:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.491 06:38:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.491 06:38:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:00.491 06:38:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:00.491 06:38:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.491 06:38:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.491 06:38:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.491 06:38:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.491 06:38:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.491 06:38:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.491 06:38:14 -- paths/export.sh@5 -- # export PATH 00:08:00.491 06:38:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.491 06:38:14 -- nvmf/common.sh@46 -- # : 0 00:08:00.491 06:38:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:00.491 06:38:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:00.491 06:38:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:00.491 06:38:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.491 06:38:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.491 06:38:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:00.491 06:38:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:00.491 06:38:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:00.491 06:38:14 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:00.491 06:38:14 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:00.491 06:38:14 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:00.491 06:38:14 -- target/discovery.sh@15 -- # hash nvme 00:08:00.491 06:38:14 -- target/discovery.sh@20 -- # nvmftestinit 00:08:00.491 06:38:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:00.491 06:38:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.491 06:38:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:00.491 06:38:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:00.491 06:38:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:00.491 06:38:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.491 06:38:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.491 06:38:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.491 06:38:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:00.491 06:38:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:00.491 06:38:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:00.491 06:38:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:00.491 06:38:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:00.491 06:38:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:00.491 06:38:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.491 06:38:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.491 06:38:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:00.491 06:38:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:00.491 06:38:14 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:00.491 06:38:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:00.491 06:38:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:00.491 06:38:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.491 06:38:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:00.491 06:38:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:00.491 06:38:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:00.491 06:38:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:00.491 06:38:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:00.491 06:38:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:00.491 Cannot find device "nvmf_tgt_br" 00:08:00.491 06:38:14 -- nvmf/common.sh@154 -- # true 00:08:00.491 06:38:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:00.491 Cannot find device "nvmf_tgt_br2" 00:08:00.491 06:38:14 -- nvmf/common.sh@155 -- # true 00:08:00.491 06:38:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:00.491 06:38:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:00.491 Cannot find device "nvmf_tgt_br" 00:08:00.491 06:38:14 -- nvmf/common.sh@157 -- # true 00:08:00.491 06:38:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:00.491 Cannot find device "nvmf_tgt_br2" 00:08:00.491 06:38:14 -- nvmf/common.sh@158 -- # true 00:08:00.491 06:38:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:00.491 06:38:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:00.491 06:38:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:00.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:00.491 06:38:14 -- nvmf/common.sh@161 -- # true 00:08:00.491 06:38:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:00.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:00.491 06:38:14 -- nvmf/common.sh@162 -- # true 00:08:00.491 06:38:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:00.491 06:38:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:00.491 06:38:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:00.491 06:38:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:00.491 06:38:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:00.491 06:38:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:00.491 06:38:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:00.750 06:38:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:00.750 06:38:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:00.750 06:38:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:00.750 06:38:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:00.750 06:38:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:00.750 06:38:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:00.750 06:38:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:00.750 06:38:14 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:00.750 06:38:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:00.750 06:38:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:00.750 06:38:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:00.750 06:38:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:00.750 06:38:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:00.750 06:38:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:00.750 06:38:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:00.750 06:38:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:00.750 06:38:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:00.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:08:00.750 00:08:00.750 --- 10.0.0.2 ping statistics --- 00:08:00.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.750 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:08:00.750 06:38:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:00.750 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:00.750 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:08:00.750 00:08:00.750 --- 10.0.0.3 ping statistics --- 00:08:00.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.750 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:08:00.750 06:38:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:00.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:00.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:08:00.750 00:08:00.750 --- 10.0.0.1 ping statistics --- 00:08:00.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.750 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:08:00.750 06:38:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.750 06:38:14 -- nvmf/common.sh@421 -- # return 0 00:08:00.750 06:38:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:00.750 06:38:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.750 06:38:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:00.750 06:38:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:00.750 06:38:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.750 06:38:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:00.750 06:38:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:00.750 06:38:14 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:00.750 06:38:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:00.750 06:38:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:00.750 06:38:14 -- common/autotest_common.sh@10 -- # set +x 00:08:00.750 06:38:14 -- nvmf/common.sh@469 -- # nvmfpid=61603 00:08:00.750 06:38:14 -- nvmf/common.sh@470 -- # waitforlisten 61603 00:08:00.750 06:38:14 -- common/autotest_common.sh@829 -- # '[' -z 61603 ']' 00:08:00.750 06:38:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:00.750 06:38:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.750 06:38:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:00.750 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.750 06:38:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.750 06:38:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:00.750 06:38:14 -- common/autotest_common.sh@10 -- # set +x 00:08:00.750 [2024-12-14 06:38:14.696704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:00.750 [2024-12-14 06:38:14.696810] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.008 [2024-12-14 06:38:14.837450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:01.008 [2024-12-14 06:38:14.966760] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:01.008 [2024-12-14 06:38:14.966997] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.008 [2024-12-14 06:38:14.967014] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.008 [2024-12-14 06:38:14.967026] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:01.008 [2024-12-14 06:38:14.967147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.008 [2024-12-14 06:38:14.967805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.008 [2024-12-14 06:38:14.968668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.008 [2024-12-14 06:38:14.968722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.942 06:38:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:01.942 06:38:15 -- common/autotest_common.sh@862 -- # return 0 00:08:01.942 06:38:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:01.942 06:38:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:01.942 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.942 06:38:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.942 06:38:15 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:01.942 06:38:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.942 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.942 [2024-12-14 06:38:15.778305] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.942 06:38:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.942 06:38:15 -- target/discovery.sh@26 -- # seq 1 4 00:08:01.942 06:38:15 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:01.942 06:38:15 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:01.942 06:38:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.942 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.942 Null1 00:08:01.942 06:38:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.942 06:38:15 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:01.942 06:38:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.942 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.942 06:38:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
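At this point the trace has the target application listening on its RPC socket and begins configuring it. A minimal sketch of that same bring-up sequence, assuming rpc_cmd forwards to the scripts/rpc.py client in the checked-out SPDK tree (the rpc.py path is an assumption; the method names, sizes and addresses are the ones visible in the surrounding trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed client; rpc_cmd drives the same methods

    $rpc nvmf_create_transport -t tcp -o -u 8192                      # flags exactly as in the trace
    $rpc bdev_null_create Null1 102400 512                            # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from the trace
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The same four calls repeat for Null2 through Null4 / cnode2 through cnode4 in the lines that follow, and a discovery listener on port 4420 plus a port-4430 referral are added the same way, which is what produces the six-record discovery log further down.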
00:08:01.942 06:38:15 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:01.942 06:38:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.942 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.942 06:38:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.942 06:38:15 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:01.942 06:38:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.942 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.942 [2024-12-14 06:38:15.833460] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:01.942 06:38:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.942 06:38:15 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:01.942 06:38:15 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:01.942 06:38:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.942 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.942 Null2 00:08:01.942 06:38:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.942 06:38:15 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:01.942 06:38:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.942 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.942 06:38:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.942 06:38:15 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:01.942 06:38:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.942 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.942 06:38:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.942 06:38:15 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:01.942 06:38:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.942 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.942 06:38:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.943 06:38:15 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:01.943 06:38:15 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:01.943 06:38:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.943 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.943 Null3 00:08:01.943 06:38:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.943 06:38:15 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:01.943 06:38:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.943 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.943 06:38:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.943 06:38:15 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:01.943 06:38:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.943 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.943 06:38:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.943 06:38:15 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:01.943 06:38:15 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:01.943 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.943 06:38:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.943 06:38:15 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:01.943 06:38:15 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:01.943 06:38:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.943 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.943 Null4 00:08:01.943 06:38:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.943 06:38:15 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:01.943 06:38:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.943 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.943 06:38:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.943 06:38:15 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:01.943 06:38:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.943 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.943 06:38:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.943 06:38:15 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:01.943 06:38:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.943 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:08:02.201 06:38:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.201 06:38:15 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:02.201 06:38:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.201 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:08:02.201 06:38:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.201 06:38:15 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:02.201 06:38:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.201 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:08:02.201 06:38:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.201 06:38:15 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -a 10.0.0.2 -s 4420 00:08:02.201 00:08:02.201 Discovery Log Number of Records 6, Generation counter 6 00:08:02.201 =====Discovery Log Entry 0====== 00:08:02.201 trtype: tcp 00:08:02.201 adrfam: ipv4 00:08:02.201 subtype: current discovery subsystem 00:08:02.201 treq: not required 00:08:02.201 portid: 0 00:08:02.201 trsvcid: 4420 00:08:02.201 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:02.201 traddr: 10.0.0.2 00:08:02.201 eflags: explicit discovery connections, duplicate discovery information 00:08:02.201 sectype: none 00:08:02.201 =====Discovery Log Entry 1====== 00:08:02.201 trtype: tcp 00:08:02.201 adrfam: ipv4 00:08:02.201 subtype: nvme subsystem 00:08:02.201 treq: not required 00:08:02.201 portid: 0 00:08:02.201 trsvcid: 4420 00:08:02.201 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:02.201 traddr: 10.0.0.2 00:08:02.201 eflags: none 00:08:02.201 sectype: none 00:08:02.201 =====Discovery Log Entry 2====== 00:08:02.201 trtype: tcp 00:08:02.201 adrfam: ipv4 00:08:02.201 subtype: nvme subsystem 00:08:02.201 treq: not required 00:08:02.201 portid: 0 00:08:02.201 trsvcid: 4420 
00:08:02.202 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:02.202 traddr: 10.0.0.2 00:08:02.202 eflags: none 00:08:02.202 sectype: none 00:08:02.202 =====Discovery Log Entry 3====== 00:08:02.202 trtype: tcp 00:08:02.202 adrfam: ipv4 00:08:02.202 subtype: nvme subsystem 00:08:02.202 treq: not required 00:08:02.202 portid: 0 00:08:02.202 trsvcid: 4420 00:08:02.202 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:02.202 traddr: 10.0.0.2 00:08:02.202 eflags: none 00:08:02.202 sectype: none 00:08:02.202 =====Discovery Log Entry 4====== 00:08:02.202 trtype: tcp 00:08:02.202 adrfam: ipv4 00:08:02.202 subtype: nvme subsystem 00:08:02.202 treq: not required 00:08:02.202 portid: 0 00:08:02.202 trsvcid: 4420 00:08:02.202 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:02.202 traddr: 10.0.0.2 00:08:02.202 eflags: none 00:08:02.202 sectype: none 00:08:02.202 =====Discovery Log Entry 5====== 00:08:02.202 trtype: tcp 00:08:02.202 adrfam: ipv4 00:08:02.202 subtype: discovery subsystem referral 00:08:02.202 treq: not required 00:08:02.202 portid: 0 00:08:02.202 trsvcid: 4430 00:08:02.202 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:02.202 traddr: 10.0.0.2 00:08:02.202 eflags: none 00:08:02.202 sectype: none 00:08:02.202 Perform nvmf subsystem discovery via RPC 00:08:02.202 06:38:16 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:02.202 06:38:16 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:02.202 06:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.202 06:38:16 -- common/autotest_common.sh@10 -- # set +x 00:08:02.202 [2024-12-14 06:38:16.069435] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:02.202 [ 00:08:02.202 { 00:08:02.202 "allow_any_host": true, 00:08:02.202 "hosts": [], 00:08:02.202 "listen_addresses": [ 00:08:02.202 { 00:08:02.202 "adrfam": "IPv4", 00:08:02.202 "traddr": "10.0.0.2", 00:08:02.202 "transport": "TCP", 00:08:02.202 "trsvcid": "4420", 00:08:02.202 "trtype": "TCP" 00:08:02.202 } 00:08:02.202 ], 00:08:02.202 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:02.202 "subtype": "Discovery" 00:08:02.202 }, 00:08:02.202 { 00:08:02.202 "allow_any_host": true, 00:08:02.202 "hosts": [], 00:08:02.202 "listen_addresses": [ 00:08:02.202 { 00:08:02.202 "adrfam": "IPv4", 00:08:02.202 "traddr": "10.0.0.2", 00:08:02.202 "transport": "TCP", 00:08:02.202 "trsvcid": "4420", 00:08:02.202 "trtype": "TCP" 00:08:02.202 } 00:08:02.202 ], 00:08:02.202 "max_cntlid": 65519, 00:08:02.202 "max_namespaces": 32, 00:08:02.202 "min_cntlid": 1, 00:08:02.202 "model_number": "SPDK bdev Controller", 00:08:02.202 "namespaces": [ 00:08:02.202 { 00:08:02.202 "bdev_name": "Null1", 00:08:02.202 "name": "Null1", 00:08:02.202 "nguid": "8B97922C7FD2410F85766538EB0FEF6A", 00:08:02.202 "nsid": 1, 00:08:02.202 "uuid": "8b97922c-7fd2-410f-8576-6538eb0fef6a" 00:08:02.202 } 00:08:02.202 ], 00:08:02.202 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:02.202 "serial_number": "SPDK00000000000001", 00:08:02.202 "subtype": "NVMe" 00:08:02.202 }, 00:08:02.202 { 00:08:02.202 "allow_any_host": true, 00:08:02.202 "hosts": [], 00:08:02.202 "listen_addresses": [ 00:08:02.202 { 00:08:02.202 "adrfam": "IPv4", 00:08:02.202 "traddr": "10.0.0.2", 00:08:02.202 "transport": "TCP", 00:08:02.202 "trsvcid": "4420", 00:08:02.202 "trtype": "TCP" 00:08:02.202 } 00:08:02.202 ], 00:08:02.202 "max_cntlid": 65519, 00:08:02.202 "max_namespaces": 32, 00:08:02.202 "min_cntlid": 1, 
00:08:02.202 "model_number": "SPDK bdev Controller", 00:08:02.202 "namespaces": [ 00:08:02.202 { 00:08:02.202 "bdev_name": "Null2", 00:08:02.202 "name": "Null2", 00:08:02.202 "nguid": "6A431E2020E9461181F299521C7F2A65", 00:08:02.202 "nsid": 1, 00:08:02.202 "uuid": "6a431e20-20e9-4611-81f2-99521c7f2a65" 00:08:02.202 } 00:08:02.202 ], 00:08:02.202 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:02.202 "serial_number": "SPDK00000000000002", 00:08:02.202 "subtype": "NVMe" 00:08:02.202 }, 00:08:02.202 { 00:08:02.202 "allow_any_host": true, 00:08:02.202 "hosts": [], 00:08:02.202 "listen_addresses": [ 00:08:02.202 { 00:08:02.202 "adrfam": "IPv4", 00:08:02.202 "traddr": "10.0.0.2", 00:08:02.202 "transport": "TCP", 00:08:02.202 "trsvcid": "4420", 00:08:02.202 "trtype": "TCP" 00:08:02.202 } 00:08:02.202 ], 00:08:02.202 "max_cntlid": 65519, 00:08:02.202 "max_namespaces": 32, 00:08:02.202 "min_cntlid": 1, 00:08:02.202 "model_number": "SPDK bdev Controller", 00:08:02.202 "namespaces": [ 00:08:02.202 { 00:08:02.202 "bdev_name": "Null3", 00:08:02.202 "name": "Null3", 00:08:02.202 "nguid": "9B7452A5A32B434F80A535BFDDA29933", 00:08:02.202 "nsid": 1, 00:08:02.202 "uuid": "9b7452a5-a32b-434f-80a5-35bfdda29933" 00:08:02.202 } 00:08:02.202 ], 00:08:02.202 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:02.202 "serial_number": "SPDK00000000000003", 00:08:02.202 "subtype": "NVMe" 00:08:02.202 }, 00:08:02.202 { 00:08:02.202 "allow_any_host": true, 00:08:02.202 "hosts": [], 00:08:02.202 "listen_addresses": [ 00:08:02.202 { 00:08:02.202 "adrfam": "IPv4", 00:08:02.202 "traddr": "10.0.0.2", 00:08:02.202 "transport": "TCP", 00:08:02.202 "trsvcid": "4420", 00:08:02.202 "trtype": "TCP" 00:08:02.202 } 00:08:02.202 ], 00:08:02.202 "max_cntlid": 65519, 00:08:02.202 "max_namespaces": 32, 00:08:02.202 "min_cntlid": 1, 00:08:02.202 "model_number": "SPDK bdev Controller", 00:08:02.202 "namespaces": [ 00:08:02.202 { 00:08:02.202 "bdev_name": "Null4", 00:08:02.202 "name": "Null4", 00:08:02.202 "nguid": "9F5121DFAC3C4696A3A1D5659ACA8544", 00:08:02.202 "nsid": 1, 00:08:02.202 "uuid": "9f5121df-ac3c-4696-a3a1-d5659aca8544" 00:08:02.202 } 00:08:02.202 ], 00:08:02.202 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:02.202 "serial_number": "SPDK00000000000004", 00:08:02.202 "subtype": "NVMe" 00:08:02.202 } 00:08:02.202 ] 00:08:02.202 06:38:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.202 06:38:16 -- target/discovery.sh@42 -- # seq 1 4 00:08:02.202 06:38:16 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:02.202 06:38:16 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.202 06:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.202 06:38:16 -- common/autotest_common.sh@10 -- # set +x 00:08:02.202 06:38:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.202 06:38:16 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:02.202 06:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.202 06:38:16 -- common/autotest_common.sh@10 -- # set +x 00:08:02.202 06:38:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.202 06:38:16 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:02.202 06:38:16 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:02.202 06:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.202 06:38:16 -- common/autotest_common.sh@10 -- # set +x 00:08:02.202 06:38:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.202 06:38:16 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:02.202 06:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.202 06:38:16 -- common/autotest_common.sh@10 -- # set +x 00:08:02.202 06:38:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.202 06:38:16 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:02.202 06:38:16 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:02.202 06:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.202 06:38:16 -- common/autotest_common.sh@10 -- # set +x 00:08:02.202 06:38:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.202 06:38:16 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:02.202 06:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.202 06:38:16 -- common/autotest_common.sh@10 -- # set +x 00:08:02.202 06:38:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.202 06:38:16 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:02.202 06:38:16 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:02.202 06:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.202 06:38:16 -- common/autotest_common.sh@10 -- # set +x 00:08:02.202 06:38:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.202 06:38:16 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:02.202 06:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.202 06:38:16 -- common/autotest_common.sh@10 -- # set +x 00:08:02.202 06:38:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.202 06:38:16 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:02.202 06:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.202 06:38:16 -- common/autotest_common.sh@10 -- # set +x 00:08:02.202 06:38:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.202 06:38:16 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:02.202 06:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.202 06:38:16 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:02.202 06:38:16 -- common/autotest_common.sh@10 -- # set +x 00:08:02.202 06:38:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.461 06:38:16 -- target/discovery.sh@49 -- # check_bdevs= 00:08:02.461 06:38:16 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:02.461 06:38:16 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:02.461 06:38:16 -- target/discovery.sh@57 -- # nvmftestfini 00:08:02.461 06:38:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:02.461 06:38:16 -- nvmf/common.sh@116 -- # sync 00:08:02.461 06:38:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:02.461 06:38:16 -- nvmf/common.sh@119 -- # set +e 00:08:02.461 06:38:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:02.461 06:38:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:02.461 rmmod nvme_tcp 00:08:02.461 rmmod nvme_fabrics 00:08:02.461 rmmod nvme_keyring 00:08:02.461 06:38:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:02.461 06:38:16 -- nvmf/common.sh@123 -- # set -e 00:08:02.461 06:38:16 -- nvmf/common.sh@124 -- # return 0 00:08:02.461 06:38:16 -- nvmf/common.sh@477 -- # '[' -n 61603 ']' 00:08:02.461 06:38:16 -- nvmf/common.sh@478 -- # killprocess 61603 00:08:02.461 06:38:16 -- common/autotest_common.sh@936 -- # '[' -z 61603 ']' 00:08:02.461 06:38:16 -- 
common/autotest_common.sh@940 -- # kill -0 61603 00:08:02.461 06:38:16 -- common/autotest_common.sh@941 -- # uname 00:08:02.461 06:38:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:02.461 06:38:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61603 00:08:02.461 06:38:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:02.461 killing process with pid 61603 00:08:02.461 06:38:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:02.461 06:38:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61603' 00:08:02.461 06:38:16 -- common/autotest_common.sh@955 -- # kill 61603 00:08:02.461 [2024-12-14 06:38:16.344619] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:02.461 06:38:16 -- common/autotest_common.sh@960 -- # wait 61603 00:08:02.719 06:38:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:02.719 06:38:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:02.719 06:38:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:02.719 06:38:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:02.719 06:38:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:02.719 06:38:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.719 06:38:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.719 06:38:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.978 06:38:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:02.978 00:08:02.978 real 0m2.658s 00:08:02.978 user 0m7.043s 00:08:02.978 sys 0m0.723s 00:08:02.978 06:38:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.978 06:38:16 -- common/autotest_common.sh@10 -- # set +x 00:08:02.978 ************************************ 00:08:02.978 END TEST nvmf_discovery 00:08:02.978 ************************************ 00:08:02.978 06:38:16 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:02.978 06:38:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:02.978 06:38:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.978 06:38:16 -- common/autotest_common.sh@10 -- # set +x 00:08:02.978 ************************************ 00:08:02.978 START TEST nvmf_referrals 00:08:02.978 ************************************ 00:08:02.978 06:38:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:02.978 * Looking for test storage... 
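Before the nvmf_referrals run below repeats the same environment setup, note that the six-record discovery log captured in the nvmf_discovery run above is what the initiator-side nvme-cli call below returns once the four subsystems and the referral are in place; a sketch only, with the generated hostnqn/hostid copied from this trace (they differ between runs):

    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 \
        --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986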
00:08:02.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:02.978 06:38:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:02.978 06:38:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:02.978 06:38:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:02.978 06:38:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:02.978 06:38:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:02.978 06:38:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:02.978 06:38:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:02.978 06:38:16 -- scripts/common.sh@335 -- # IFS=.-: 00:08:02.978 06:38:16 -- scripts/common.sh@335 -- # read -ra ver1 00:08:02.978 06:38:16 -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.978 06:38:16 -- scripts/common.sh@336 -- # read -ra ver2 00:08:02.978 06:38:16 -- scripts/common.sh@337 -- # local 'op=<' 00:08:02.978 06:38:16 -- scripts/common.sh@339 -- # ver1_l=2 00:08:02.978 06:38:16 -- scripts/common.sh@340 -- # ver2_l=1 00:08:02.978 06:38:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:02.978 06:38:16 -- scripts/common.sh@343 -- # case "$op" in 00:08:02.978 06:38:16 -- scripts/common.sh@344 -- # : 1 00:08:02.978 06:38:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:02.978 06:38:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:02.978 06:38:16 -- scripts/common.sh@364 -- # decimal 1 00:08:02.978 06:38:16 -- scripts/common.sh@352 -- # local d=1 00:08:02.978 06:38:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.978 06:38:16 -- scripts/common.sh@354 -- # echo 1 00:08:02.978 06:38:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:02.978 06:38:16 -- scripts/common.sh@365 -- # decimal 2 00:08:02.978 06:38:16 -- scripts/common.sh@352 -- # local d=2 00:08:02.978 06:38:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.978 06:38:16 -- scripts/common.sh@354 -- # echo 2 00:08:02.978 06:38:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:02.978 06:38:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:02.978 06:38:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:02.978 06:38:16 -- scripts/common.sh@367 -- # return 0 00:08:02.978 06:38:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.978 06:38:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:02.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.978 --rc genhtml_branch_coverage=1 00:08:02.978 --rc genhtml_function_coverage=1 00:08:02.978 --rc genhtml_legend=1 00:08:02.978 --rc geninfo_all_blocks=1 00:08:02.978 --rc geninfo_unexecuted_blocks=1 00:08:02.978 00:08:02.978 ' 00:08:02.978 06:38:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:02.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.978 --rc genhtml_branch_coverage=1 00:08:02.978 --rc genhtml_function_coverage=1 00:08:02.978 --rc genhtml_legend=1 00:08:02.978 --rc geninfo_all_blocks=1 00:08:02.978 --rc geninfo_unexecuted_blocks=1 00:08:02.978 00:08:02.978 ' 00:08:02.978 06:38:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:02.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.978 --rc genhtml_branch_coverage=1 00:08:02.978 --rc genhtml_function_coverage=1 00:08:02.978 --rc genhtml_legend=1 00:08:02.978 --rc geninfo_all_blocks=1 00:08:02.978 --rc geninfo_unexecuted_blocks=1 00:08:02.978 00:08:02.978 ' 00:08:02.978 
06:38:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:02.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.978 --rc genhtml_branch_coverage=1 00:08:02.978 --rc genhtml_function_coverage=1 00:08:02.978 --rc genhtml_legend=1 00:08:02.978 --rc geninfo_all_blocks=1 00:08:02.978 --rc geninfo_unexecuted_blocks=1 00:08:02.978 00:08:02.978 ' 00:08:02.978 06:38:16 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:02.978 06:38:16 -- nvmf/common.sh@7 -- # uname -s 00:08:02.978 06:38:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.978 06:38:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.978 06:38:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.978 06:38:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.978 06:38:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.978 06:38:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.978 06:38:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.237 06:38:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.237 06:38:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.237 06:38:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.237 06:38:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:08:03.237 06:38:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:08:03.237 06:38:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.237 06:38:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.237 06:38:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:03.237 06:38:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:03.237 06:38:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.237 06:38:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.237 06:38:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.237 06:38:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.237 06:38:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.237 06:38:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.237 06:38:16 -- paths/export.sh@5 -- # export PATH 00:08:03.237 06:38:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.237 06:38:16 -- nvmf/common.sh@46 -- # : 0 00:08:03.237 06:38:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:03.237 06:38:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:03.237 06:38:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:03.237 06:38:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.237 06:38:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.237 06:38:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:03.237 06:38:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:03.237 06:38:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:03.237 06:38:16 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:03.237 06:38:16 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:03.237 06:38:16 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:03.237 06:38:16 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:03.237 06:38:16 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:03.237 06:38:16 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:03.237 06:38:16 -- target/referrals.sh@37 -- # nvmftestinit 00:08:03.237 06:38:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:03.237 06:38:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:03.237 06:38:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:03.237 06:38:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:03.237 06:38:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:03.237 06:38:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.237 06:38:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.237 06:38:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.237 06:38:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:03.237 06:38:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:03.237 06:38:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:03.237 06:38:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:03.237 06:38:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:03.237 06:38:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:03.237 06:38:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.237 06:38:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
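The nvmf_veth_init sequence that follows (identical to the one in the discovery run above) stands up a small virtual topology: the target runs inside the nvmf_tgt_ns_spdk namespace with two addressed veth interfaces, the initiator keeps one veth on the host side, and everything is joined by the nvmf_br bridge. Condensed into the essential commands, taken from this trace and omitting the teardown of leftover devices and the true/false probing:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # NVMF_SECOND_TARGET_IP
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP toward the initiator veth
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                      # let the bridge forward between its ports
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                 # reachability checks, as in the trace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1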
00:08:03.237 06:38:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:03.237 06:38:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:03.237 06:38:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:03.237 06:38:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:03.237 06:38:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:03.237 06:38:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.237 06:38:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:03.237 06:38:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:03.237 06:38:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:03.237 06:38:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:03.237 06:38:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:03.237 06:38:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:03.237 Cannot find device "nvmf_tgt_br" 00:08:03.237 06:38:17 -- nvmf/common.sh@154 -- # true 00:08:03.237 06:38:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:03.237 Cannot find device "nvmf_tgt_br2" 00:08:03.237 06:38:17 -- nvmf/common.sh@155 -- # true 00:08:03.238 06:38:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:03.238 06:38:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:03.238 Cannot find device "nvmf_tgt_br" 00:08:03.238 06:38:17 -- nvmf/common.sh@157 -- # true 00:08:03.238 06:38:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:03.238 Cannot find device "nvmf_tgt_br2" 00:08:03.238 06:38:17 -- nvmf/common.sh@158 -- # true 00:08:03.238 06:38:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:03.238 06:38:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:03.238 06:38:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:03.238 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:03.238 06:38:17 -- nvmf/common.sh@161 -- # true 00:08:03.238 06:38:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:03.238 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:03.238 06:38:17 -- nvmf/common.sh@162 -- # true 00:08:03.238 06:38:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:03.238 06:38:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:03.238 06:38:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:03.238 06:38:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:03.238 06:38:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:03.238 06:38:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:03.238 06:38:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:03.496 06:38:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:03.496 06:38:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:03.496 06:38:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:03.496 06:38:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:03.496 06:38:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:08:03.496 06:38:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:03.496 06:38:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:03.496 06:38:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:03.496 06:38:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:03.496 06:38:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:03.496 06:38:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:03.496 06:38:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:03.496 06:38:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:03.496 06:38:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:03.496 06:38:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:03.496 06:38:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:03.496 06:38:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:03.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:08:03.496 00:08:03.496 --- 10.0.0.2 ping statistics --- 00:08:03.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.496 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:08:03.497 06:38:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:03.497 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:03.497 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:08:03.497 00:08:03.497 --- 10.0.0.3 ping statistics --- 00:08:03.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.497 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:03.497 06:38:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:03.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:03.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:03.497 00:08:03.497 --- 10.0.0.1 ping statistics --- 00:08:03.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.497 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:03.497 06:38:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.497 06:38:17 -- nvmf/common.sh@421 -- # return 0 00:08:03.497 06:38:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:03.497 06:38:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.497 06:38:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:03.497 06:38:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:03.497 06:38:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.497 06:38:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:03.497 06:38:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:03.497 06:38:17 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:03.497 06:38:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:03.497 06:38:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:03.497 06:38:17 -- common/autotest_common.sh@10 -- # set +x 00:08:03.497 06:38:17 -- nvmf/common.sh@469 -- # nvmfpid=61838 00:08:03.497 06:38:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:03.497 06:38:17 -- nvmf/common.sh@470 -- # waitforlisten 61838 00:08:03.497 06:38:17 -- common/autotest_common.sh@829 -- # '[' -z 61838 ']' 00:08:03.497 06:38:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.497 06:38:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:03.497 06:38:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.497 06:38:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:03.497 06:38:17 -- common/autotest_common.sh@10 -- # set +x 00:08:03.497 [2024-12-14 06:38:17.438030] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:03.497 [2024-12-14 06:38:17.438144] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.755 [2024-12-14 06:38:17.573336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.755 [2024-12-14 06:38:17.672437] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:03.755 [2024-12-14 06:38:17.672640] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.755 [2024-12-14 06:38:17.672655] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.755 [2024-12-14 06:38:17.672664] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
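The referral-specific part of referrals.sh that the trace walks through below reduces to a handful of RPCs: a discovery listener on port 8009, three referrals pointing at 127.0.0.2/3/4 port 4430, a list-and-verify step, and then removal and re-addition of referrals scoped to a subsystem NQN. A compact sketch, again assuming the scripts/rpc.py client behind rpc_cmd (addresses and ports are the ones used in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed client; rpc_cmd drives the same methods

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery    # discovery service queried by nvme discover

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do                              # NVMF_REFERRAL_IP_1..3
        $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    $rpc nvmf_discovery_get_referrals                                        # the test expects 3 entries here

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done

The initiator-side half of the check pipes nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json through jq, sorts the traddr fields, and compares them against the same list, which is exactly what the get_referral_ips helper in the trace does.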
00:08:03.755 [2024-12-14 06:38:17.672845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.755 [2024-12-14 06:38:17.672987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.755 [2024-12-14 06:38:17.673692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.755 [2024-12-14 06:38:17.673722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.689 06:38:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:04.689 06:38:18 -- common/autotest_common.sh@862 -- # return 0 00:08:04.689 06:38:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:04.689 06:38:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:04.689 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:08:04.689 06:38:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.689 06:38:18 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:04.689 06:38:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.689 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:08:04.689 [2024-12-14 06:38:18.542266] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.689 06:38:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.689 06:38:18 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:04.689 06:38:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.689 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:08:04.689 [2024-12-14 06:38:18.561948] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:04.689 06:38:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.689 06:38:18 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:04.689 06:38:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.689 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:08:04.689 06:38:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.689 06:38:18 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:04.689 06:38:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.689 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:08:04.689 06:38:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.689 06:38:18 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:04.689 06:38:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.689 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:08:04.689 06:38:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.689 06:38:18 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.689 06:38:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.689 06:38:18 -- target/referrals.sh@48 -- # jq length 00:08:04.689 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:08:04.689 06:38:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.689 06:38:18 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:04.689 06:38:18 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:04.689 06:38:18 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:04.689 06:38:18 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:04.689 06:38:18 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 
00:08:04.689 06:38:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.689 06:38:18 -- target/referrals.sh@21 -- # sort 00:08:04.689 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:08:04.689 06:38:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.948 06:38:18 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:04.948 06:38:18 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:04.948 06:38:18 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:04.948 06:38:18 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.948 06:38:18 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.948 06:38:18 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.948 06:38:18 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.948 06:38:18 -- target/referrals.sh@26 -- # sort 00:08:04.948 06:38:18 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:04.948 06:38:18 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:04.948 06:38:18 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:04.948 06:38:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.948 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:08:04.948 06:38:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.948 06:38:18 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:04.948 06:38:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.948 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:08:04.948 06:38:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.948 06:38:18 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:04.948 06:38:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.948 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:08:04.948 06:38:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.948 06:38:18 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.948 06:38:18 -- target/referrals.sh@56 -- # jq length 00:08:04.948 06:38:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.948 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:08:04.948 06:38:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.948 06:38:18 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:04.948 06:38:18 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:04.948 06:38:18 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.948 06:38:18 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.948 06:38:18 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.948 06:38:18 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.948 06:38:18 -- target/referrals.sh@26 -- # sort 00:08:05.207 06:38:19 -- target/referrals.sh@26 -- # echo 00:08:05.207 06:38:19 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:05.207 06:38:19 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:05.207 06:38:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.207 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.207 06:38:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.207 06:38:19 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:05.207 06:38:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.207 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.207 06:38:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.207 06:38:19 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:05.207 06:38:19 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:05.207 06:38:19 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:05.207 06:38:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.207 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.207 06:38:19 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:05.207 06:38:19 -- target/referrals.sh@21 -- # sort 00:08:05.207 06:38:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.207 06:38:19 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:05.207 06:38:19 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:05.207 06:38:19 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:05.207 06:38:19 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:05.207 06:38:19 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:05.207 06:38:19 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.207 06:38:19 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:05.207 06:38:19 -- target/referrals.sh@26 -- # sort 00:08:05.465 06:38:19 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:05.465 06:38:19 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:05.465 06:38:19 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:05.465 06:38:19 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:05.465 06:38:19 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:05.465 06:38:19 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.465 06:38:19 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:05.465 06:38:19 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:05.465 06:38:19 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:05.465 06:38:19 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:05.465 06:38:19 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:05.465 06:38:19 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:05.465 06:38:19 -- target/referrals.sh@33 -- # nvme 
discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.724 06:38:19 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:05.724 06:38:19 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:05.724 06:38:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.724 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.724 06:38:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.724 06:38:19 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:05.724 06:38:19 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:05.724 06:38:19 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:05.724 06:38:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.724 06:38:19 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:05.724 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.724 06:38:19 -- target/referrals.sh@21 -- # sort 00:08:05.724 06:38:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.724 06:38:19 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:05.724 06:38:19 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:05.724 06:38:19 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:05.724 06:38:19 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:05.724 06:38:19 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:05.724 06:38:19 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.724 06:38:19 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:05.724 06:38:19 -- target/referrals.sh@26 -- # sort 00:08:05.724 06:38:19 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:05.724 06:38:19 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:05.724 06:38:19 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:05.724 06:38:19 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:05.724 06:38:19 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:05.724 06:38:19 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.724 06:38:19 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:05.982 06:38:19 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:05.982 06:38:19 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:05.982 06:38:19 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:05.982 06:38:19 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:05.982 06:38:19 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.982 06:38:19 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:05.983 06:38:19 -- 
target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:05.983 06:38:19 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:05.983 06:38:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.983 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.983 06:38:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.983 06:38:19 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:05.983 06:38:19 -- target/referrals.sh@82 -- # jq length 00:08:05.983 06:38:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.983 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.983 06:38:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.983 06:38:19 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:05.983 06:38:19 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:05.983 06:38:19 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:05.983 06:38:19 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:05.983 06:38:19 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.983 06:38:19 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:05.983 06:38:19 -- target/referrals.sh@26 -- # sort 00:08:06.242 06:38:20 -- target/referrals.sh@26 -- # echo 00:08:06.242 06:38:20 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:06.242 06:38:20 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:06.242 06:38:20 -- target/referrals.sh@86 -- # nvmftestfini 00:08:06.242 06:38:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:06.242 06:38:20 -- nvmf/common.sh@116 -- # sync 00:08:06.242 06:38:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:06.242 06:38:20 -- nvmf/common.sh@119 -- # set +e 00:08:06.242 06:38:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:06.242 06:38:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:06.242 rmmod nvme_tcp 00:08:06.242 rmmod nvme_fabrics 00:08:06.242 rmmod nvme_keyring 00:08:06.242 06:38:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:06.242 06:38:20 -- nvmf/common.sh@123 -- # set -e 00:08:06.242 06:38:20 -- nvmf/common.sh@124 -- # return 0 00:08:06.242 06:38:20 -- nvmf/common.sh@477 -- # '[' -n 61838 ']' 00:08:06.242 06:38:20 -- nvmf/common.sh@478 -- # killprocess 61838 00:08:06.243 06:38:20 -- common/autotest_common.sh@936 -- # '[' -z 61838 ']' 00:08:06.243 06:38:20 -- common/autotest_common.sh@940 -- # kill -0 61838 00:08:06.243 06:38:20 -- common/autotest_common.sh@941 -- # uname 00:08:06.243 06:38:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:06.243 06:38:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61838 00:08:06.243 06:38:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:06.243 06:38:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:06.243 killing process with pid 61838 00:08:06.243 06:38:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61838' 00:08:06.243 06:38:20 -- common/autotest_common.sh@955 -- # kill 61838 00:08:06.243 06:38:20 -- common/autotest_common.sh@960 -- # wait 61838 00:08:06.837 06:38:20 -- nvmf/common.sh@480 -- # 
'[' '' == iso ']' 00:08:06.837 06:38:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:06.837 06:38:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:06.837 06:38:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:06.837 06:38:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:06.837 06:38:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.837 06:38:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.837 06:38:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.837 06:38:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:06.837 00:08:06.837 real 0m3.852s 00:08:06.837 user 0m12.511s 00:08:06.837 sys 0m0.974s 00:08:06.837 06:38:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:06.837 06:38:20 -- common/autotest_common.sh@10 -- # set +x 00:08:06.837 ************************************ 00:08:06.837 END TEST nvmf_referrals 00:08:06.837 ************************************ 00:08:06.837 06:38:20 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:06.837 06:38:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:06.837 06:38:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:06.837 06:38:20 -- common/autotest_common.sh@10 -- # set +x 00:08:06.837 ************************************ 00:08:06.838 START TEST nvmf_connect_disconnect 00:08:06.838 ************************************ 00:08:06.838 06:38:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:06.838 * Looking for test storage... 00:08:06.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:06.838 06:38:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:06.838 06:38:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:06.838 06:38:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:07.185 06:38:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:07.185 06:38:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:07.185 06:38:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:07.185 06:38:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:07.185 06:38:20 -- scripts/common.sh@335 -- # IFS=.-: 00:08:07.185 06:38:20 -- scripts/common.sh@335 -- # read -ra ver1 00:08:07.185 06:38:20 -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.185 06:38:20 -- scripts/common.sh@336 -- # read -ra ver2 00:08:07.185 06:38:20 -- scripts/common.sh@337 -- # local 'op=<' 00:08:07.185 06:38:20 -- scripts/common.sh@339 -- # ver1_l=2 00:08:07.185 06:38:20 -- scripts/common.sh@340 -- # ver2_l=1 00:08:07.185 06:38:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:07.185 06:38:20 -- scripts/common.sh@343 -- # case "$op" in 00:08:07.185 06:38:20 -- scripts/common.sh@344 -- # : 1 00:08:07.185 06:38:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:07.185 06:38:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.185 06:38:20 -- scripts/common.sh@364 -- # decimal 1 00:08:07.185 06:38:20 -- scripts/common.sh@352 -- # local d=1 00:08:07.185 06:38:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.185 06:38:20 -- scripts/common.sh@354 -- # echo 1 00:08:07.185 06:38:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:07.185 06:38:20 -- scripts/common.sh@365 -- # decimal 2 00:08:07.185 06:38:20 -- scripts/common.sh@352 -- # local d=2 00:08:07.185 06:38:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.185 06:38:20 -- scripts/common.sh@354 -- # echo 2 00:08:07.185 06:38:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:07.185 06:38:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:07.185 06:38:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:07.185 06:38:20 -- scripts/common.sh@367 -- # return 0 00:08:07.185 06:38:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.185 06:38:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:07.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.185 --rc genhtml_branch_coverage=1 00:08:07.185 --rc genhtml_function_coverage=1 00:08:07.185 --rc genhtml_legend=1 00:08:07.185 --rc geninfo_all_blocks=1 00:08:07.185 --rc geninfo_unexecuted_blocks=1 00:08:07.185 00:08:07.185 ' 00:08:07.185 06:38:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:07.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.185 --rc genhtml_branch_coverage=1 00:08:07.185 --rc genhtml_function_coverage=1 00:08:07.185 --rc genhtml_legend=1 00:08:07.185 --rc geninfo_all_blocks=1 00:08:07.185 --rc geninfo_unexecuted_blocks=1 00:08:07.185 00:08:07.185 ' 00:08:07.185 06:38:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:07.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.185 --rc genhtml_branch_coverage=1 00:08:07.185 --rc genhtml_function_coverage=1 00:08:07.185 --rc genhtml_legend=1 00:08:07.185 --rc geninfo_all_blocks=1 00:08:07.185 --rc geninfo_unexecuted_blocks=1 00:08:07.185 00:08:07.185 ' 00:08:07.185 06:38:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:07.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.185 --rc genhtml_branch_coverage=1 00:08:07.185 --rc genhtml_function_coverage=1 00:08:07.185 --rc genhtml_legend=1 00:08:07.185 --rc geninfo_all_blocks=1 00:08:07.185 --rc geninfo_unexecuted_blocks=1 00:08:07.185 00:08:07.185 ' 00:08:07.185 06:38:20 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:07.185 06:38:20 -- nvmf/common.sh@7 -- # uname -s 00:08:07.185 06:38:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.185 06:38:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.185 06:38:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.185 06:38:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.185 06:38:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.185 06:38:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.185 06:38:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.185 06:38:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.185 06:38:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.185 06:38:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.185 06:38:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 
00:08:07.185 06:38:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:08:07.185 06:38:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.185 06:38:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.185 06:38:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:07.185 06:38:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:07.185 06:38:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.185 06:38:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.185 06:38:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.185 06:38:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.185 06:38:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.185 06:38:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.185 06:38:20 -- paths/export.sh@5 -- # export PATH 00:08:07.185 06:38:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.185 06:38:20 -- nvmf/common.sh@46 -- # : 0 00:08:07.185 06:38:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:07.185 06:38:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:07.185 06:38:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:07.185 06:38:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.185 06:38:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.185 06:38:20 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:08:07.185 06:38:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:07.186 06:38:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:07.186 06:38:20 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:07.186 06:38:20 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:07.186 06:38:20 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:07.186 06:38:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:07.186 06:38:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.186 06:38:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:07.186 06:38:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:07.186 06:38:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:07.186 06:38:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.186 06:38:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.186 06:38:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.186 06:38:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:07.186 06:38:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:07.186 06:38:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:07.186 06:38:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:07.186 06:38:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:07.186 06:38:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:07.186 06:38:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.186 06:38:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.186 06:38:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:07.186 06:38:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:07.186 06:38:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:07.186 06:38:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:07.186 06:38:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:07.186 06:38:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.186 06:38:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:07.186 06:38:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:07.186 06:38:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:07.186 06:38:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:07.186 06:38:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:07.186 06:38:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:07.186 Cannot find device "nvmf_tgt_br" 00:08:07.186 06:38:20 -- nvmf/common.sh@154 -- # true 00:08:07.186 06:38:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:07.186 Cannot find device "nvmf_tgt_br2" 00:08:07.186 06:38:20 -- nvmf/common.sh@155 -- # true 00:08:07.186 06:38:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:07.186 06:38:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:07.186 Cannot find device "nvmf_tgt_br" 00:08:07.186 06:38:20 -- nvmf/common.sh@157 -- # true 00:08:07.186 06:38:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:07.186 Cannot find device "nvmf_tgt_br2" 00:08:07.186 06:38:20 -- nvmf/common.sh@158 -- # true 00:08:07.186 06:38:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:07.186 06:38:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:07.186 06:38:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:08:07.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:07.186 06:38:21 -- nvmf/common.sh@161 -- # true 00:08:07.186 06:38:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:07.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:07.186 06:38:21 -- nvmf/common.sh@162 -- # true 00:08:07.186 06:38:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:07.186 06:38:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:07.186 06:38:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:07.186 06:38:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:07.186 06:38:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:07.186 06:38:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:07.186 06:38:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:07.186 06:38:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:07.186 06:38:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:07.186 06:38:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:07.186 06:38:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:07.186 06:38:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:07.186 06:38:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:07.186 06:38:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:07.186 06:38:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:07.186 06:38:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:07.186 06:38:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:07.444 06:38:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:07.444 06:38:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:07.444 06:38:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:07.444 06:38:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:07.444 06:38:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:07.444 06:38:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:07.444 06:38:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:07.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:08:07.444 00:08:07.444 --- 10.0.0.2 ping statistics --- 00:08:07.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.444 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:08:07.444 06:38:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:07.444 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:07.444 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:08:07.444 00:08:07.444 --- 10.0.0.3 ping statistics --- 00:08:07.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.444 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:08:07.444 06:38:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:07.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:07.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:08:07.444 00:08:07.444 --- 10.0.0.1 ping statistics --- 00:08:07.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.444 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:07.444 06:38:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.444 06:38:21 -- nvmf/common.sh@421 -- # return 0 00:08:07.444 06:38:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:07.444 06:38:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.444 06:38:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:07.444 06:38:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:07.444 06:38:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.444 06:38:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:07.444 06:38:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:07.444 06:38:21 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:07.444 06:38:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:07.444 06:38:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:07.444 06:38:21 -- common/autotest_common.sh@10 -- # set +x 00:08:07.444 06:38:21 -- nvmf/common.sh@469 -- # nvmfpid=62154 00:08:07.444 06:38:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:07.444 06:38:21 -- nvmf/common.sh@470 -- # waitforlisten 62154 00:08:07.444 06:38:21 -- common/autotest_common.sh@829 -- # '[' -z 62154 ']' 00:08:07.444 06:38:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.444 06:38:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:07.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.444 06:38:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.444 06:38:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:07.444 06:38:21 -- common/autotest_common.sh@10 -- # set +x 00:08:07.444 [2024-12-14 06:38:21.323950] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:07.444 [2024-12-14 06:38:21.324084] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.702 [2024-12-14 06:38:21.459797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.702 [2024-12-14 06:38:21.554591] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:07.702 [2024-12-14 06:38:21.554780] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.703 [2024-12-14 06:38:21.554793] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.703 [2024-12-14 06:38:21.554802] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:07.703 [2024-12-14 06:38:21.554990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.703 [2024-12-14 06:38:21.555142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.703 [2024-12-14 06:38:21.555735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.703 [2024-12-14 06:38:21.555765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.637 06:38:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:08.637 06:38:22 -- common/autotest_common.sh@862 -- # return 0 00:08:08.637 06:38:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:08.637 06:38:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:08.637 06:38:22 -- common/autotest_common.sh@10 -- # set +x 00:08:08.637 06:38:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.637 06:38:22 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:08.638 06:38:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.638 06:38:22 -- common/autotest_common.sh@10 -- # set +x 00:08:08.638 [2024-12-14 06:38:22.446342] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.638 06:38:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.638 06:38:22 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:08.638 06:38:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.638 06:38:22 -- common/autotest_common.sh@10 -- # set +x 00:08:08.638 06:38:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.638 06:38:22 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:08.638 06:38:22 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:08.638 06:38:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.638 06:38:22 -- common/autotest_common.sh@10 -- # set +x 00:08:08.638 06:38:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.638 06:38:22 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:08.638 06:38:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.638 06:38:22 -- common/autotest_common.sh@10 -- # set +x 00:08:08.638 06:38:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.638 06:38:22 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.638 06:38:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.638 06:38:22 -- common/autotest_common.sh@10 -- # set +x 00:08:08.638 [2024-12-14 06:38:22.531435] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.638 06:38:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.638 06:38:22 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:08.638 06:38:22 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:08.638 06:38:22 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:08.638 06:38:22 -- target/connect_disconnect.sh@34 -- # set +x 00:08:11.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:08:20.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.668 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:10:11.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.247 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.916 06:42:08 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
00:11:54.916 06:42:08 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:54.916 06:42:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:54.916 06:42:08 -- nvmf/common.sh@116 -- # sync 00:11:54.916 06:42:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:54.916 06:42:08 -- nvmf/common.sh@119 -- # set +e 00:11:54.916 06:42:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:54.916 06:42:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:54.916 rmmod nvme_tcp 00:11:54.916 rmmod nvme_fabrics 00:11:54.916 rmmod nvme_keyring 00:11:54.916 06:42:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:54.916 06:42:08 -- nvmf/common.sh@123 -- # set -e 00:11:54.916 06:42:08 -- nvmf/common.sh@124 -- # return 0 00:11:54.916 06:42:08 -- nvmf/common.sh@477 -- # '[' -n 62154 ']' 00:11:54.916 06:42:08 -- nvmf/common.sh@478 -- # killprocess 62154 00:11:54.916 06:42:08 -- common/autotest_common.sh@936 -- # '[' -z 62154 ']' 00:11:54.916 06:42:08 -- common/autotest_common.sh@940 -- # kill -0 62154 00:11:54.916 06:42:08 -- common/autotest_common.sh@941 -- # uname 00:11:54.916 06:42:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:54.916 06:42:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62154 00:11:54.916 killing process with pid 62154 00:11:54.916 06:42:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:54.916 06:42:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:54.916 06:42:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62154' 00:11:54.916 06:42:08 -- common/autotest_common.sh@955 -- # kill 62154 00:11:54.916 06:42:08 -- common/autotest_common.sh@960 -- # wait 62154 00:11:55.175 06:42:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:55.175 06:42:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:55.175 06:42:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:55.175 06:42:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:55.175 06:42:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:55.175 06:42:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.175 06:42:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.175 06:42:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.175 06:42:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:55.175 00:11:55.175 real 3m48.462s 00:11:55.175 user 14m47.219s 00:11:55.175 sys 0m25.767s 00:11:55.175 ************************************ 00:11:55.175 END TEST nvmf_connect_disconnect 00:11:55.175 ************************************ 00:11:55.175 06:42:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:55.175 06:42:09 -- common/autotest_common.sh@10 -- # set +x 00:11:55.434 06:42:09 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:55.434 06:42:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:55.434 06:42:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:55.434 06:42:09 -- common/autotest_common.sh@10 -- # set +x 00:11:55.434 ************************************ 00:11:55.434 START TEST nvmf_multitarget 00:11:55.434 ************************************ 00:11:55.434 06:42:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:55.434 * Looking for test storage... 
00:11:55.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:55.434 06:42:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:55.434 06:42:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:55.434 06:42:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:55.434 06:42:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:55.434 06:42:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:55.434 06:42:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:55.434 06:42:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:55.434 06:42:09 -- scripts/common.sh@335 -- # IFS=.-: 00:11:55.434 06:42:09 -- scripts/common.sh@335 -- # read -ra ver1 00:11:55.434 06:42:09 -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.434 06:42:09 -- scripts/common.sh@336 -- # read -ra ver2 00:11:55.434 06:42:09 -- scripts/common.sh@337 -- # local 'op=<' 00:11:55.434 06:42:09 -- scripts/common.sh@339 -- # ver1_l=2 00:11:55.434 06:42:09 -- scripts/common.sh@340 -- # ver2_l=1 00:11:55.434 06:42:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:55.434 06:42:09 -- scripts/common.sh@343 -- # case "$op" in 00:11:55.434 06:42:09 -- scripts/common.sh@344 -- # : 1 00:11:55.434 06:42:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:55.434 06:42:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:55.434 06:42:09 -- scripts/common.sh@364 -- # decimal 1 00:11:55.434 06:42:09 -- scripts/common.sh@352 -- # local d=1 00:11:55.434 06:42:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.434 06:42:09 -- scripts/common.sh@354 -- # echo 1 00:11:55.434 06:42:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:55.434 06:42:09 -- scripts/common.sh@365 -- # decimal 2 00:11:55.434 06:42:09 -- scripts/common.sh@352 -- # local d=2 00:11:55.434 06:42:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.435 06:42:09 -- scripts/common.sh@354 -- # echo 2 00:11:55.435 06:42:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:55.435 06:42:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:55.435 06:42:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:55.435 06:42:09 -- scripts/common.sh@367 -- # return 0 00:11:55.435 06:42:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.435 06:42:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:55.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.435 --rc genhtml_branch_coverage=1 00:11:55.435 --rc genhtml_function_coverage=1 00:11:55.435 --rc genhtml_legend=1 00:11:55.435 --rc geninfo_all_blocks=1 00:11:55.435 --rc geninfo_unexecuted_blocks=1 00:11:55.435 00:11:55.435 ' 00:11:55.435 06:42:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:55.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.435 --rc genhtml_branch_coverage=1 00:11:55.435 --rc genhtml_function_coverage=1 00:11:55.435 --rc genhtml_legend=1 00:11:55.435 --rc geninfo_all_blocks=1 00:11:55.435 --rc geninfo_unexecuted_blocks=1 00:11:55.435 00:11:55.435 ' 00:11:55.435 06:42:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:55.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.435 --rc genhtml_branch_coverage=1 00:11:55.435 --rc genhtml_function_coverage=1 00:11:55.435 --rc genhtml_legend=1 00:11:55.435 --rc geninfo_all_blocks=1 00:11:55.435 --rc geninfo_unexecuted_blocks=1 00:11:55.435 00:11:55.435 ' 00:11:55.435 
06:42:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:55.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.435 --rc genhtml_branch_coverage=1 00:11:55.435 --rc genhtml_function_coverage=1 00:11:55.435 --rc genhtml_legend=1 00:11:55.435 --rc geninfo_all_blocks=1 00:11:55.435 --rc geninfo_unexecuted_blocks=1 00:11:55.435 00:11:55.435 ' 00:11:55.435 06:42:09 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:55.435 06:42:09 -- nvmf/common.sh@7 -- # uname -s 00:11:55.435 06:42:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.435 06:42:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.435 06:42:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.435 06:42:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.435 06:42:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.435 06:42:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.435 06:42:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.435 06:42:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.435 06:42:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.435 06:42:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.435 06:42:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:11:55.435 06:42:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:11:55.435 06:42:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.435 06:42:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.435 06:42:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:55.435 06:42:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:55.435 06:42:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.435 06:42:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.435 06:42:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.435 06:42:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.435 06:42:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.435 06:42:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.435 06:42:09 -- paths/export.sh@5 -- # export PATH 00:11:55.435 06:42:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.435 06:42:09 -- nvmf/common.sh@46 -- # : 0 00:11:55.435 06:42:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:55.435 06:42:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:55.435 06:42:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:55.435 06:42:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.435 06:42:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.435 06:42:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:55.435 06:42:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:55.435 06:42:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:55.435 06:42:09 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:55.435 06:42:09 -- target/multitarget.sh@15 -- # nvmftestinit 00:11:55.435 06:42:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:55.435 06:42:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.435 06:42:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:55.435 06:42:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:55.435 06:42:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:55.435 06:42:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.435 06:42:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.435 06:42:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.435 06:42:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:55.435 06:42:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:55.435 06:42:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:55.435 06:42:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:55.435 06:42:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:55.435 06:42:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:55.435 06:42:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:55.435 06:42:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:55.435 06:42:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:55.435 06:42:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:55.435 06:42:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:55.435 06:42:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:55.435 06:42:09 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:55.435 06:42:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:55.435 06:42:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:55.435 06:42:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:55.435 06:42:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:55.435 06:42:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:55.435 06:42:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:55.694 06:42:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:55.694 Cannot find device "nvmf_tgt_br" 00:11:55.694 06:42:09 -- nvmf/common.sh@154 -- # true 00:11:55.694 06:42:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:55.694 Cannot find device "nvmf_tgt_br2" 00:11:55.694 06:42:09 -- nvmf/common.sh@155 -- # true 00:11:55.694 06:42:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:55.694 06:42:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:55.694 Cannot find device "nvmf_tgt_br" 00:11:55.694 06:42:09 -- nvmf/common.sh@157 -- # true 00:11:55.694 06:42:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:55.694 Cannot find device "nvmf_tgt_br2" 00:11:55.694 06:42:09 -- nvmf/common.sh@158 -- # true 00:11:55.694 06:42:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:55.694 06:42:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:55.694 06:42:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:55.694 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:55.694 06:42:09 -- nvmf/common.sh@161 -- # true 00:11:55.694 06:42:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:55.694 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:55.694 06:42:09 -- nvmf/common.sh@162 -- # true 00:11:55.694 06:42:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:55.694 06:42:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:55.694 06:42:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:55.694 06:42:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:55.694 06:42:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:55.694 06:42:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:55.694 06:42:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:55.694 06:42:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:55.694 06:42:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:55.694 06:42:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:55.694 06:42:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:55.694 06:42:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:55.694 06:42:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:55.953 06:42:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:55.953 06:42:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:55.953 06:42:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:11:55.953 06:42:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:55.953 06:42:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:55.954 06:42:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:55.954 06:42:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:55.954 06:42:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:55.954 06:42:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:55.954 06:42:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:55.954 06:42:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:55.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:55.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:11:55.954 00:11:55.954 --- 10.0.0.2 ping statistics --- 00:11:55.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.954 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:11:55.954 06:42:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:55.954 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:55.954 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:11:55.954 00:11:55.954 --- 10.0.0.3 ping statistics --- 00:11:55.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.954 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:11:55.954 06:42:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:55.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:55.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:11:55.954 00:11:55.954 --- 10.0.0.1 ping statistics --- 00:11:55.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.954 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:11:55.954 06:42:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.954 06:42:09 -- nvmf/common.sh@421 -- # return 0 00:11:55.954 06:42:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:55.954 06:42:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.954 06:42:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:55.954 06:42:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:55.954 06:42:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.954 06:42:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:55.954 06:42:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:55.954 06:42:09 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:55.954 06:42:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:55.954 06:42:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:55.954 06:42:09 -- common/autotest_common.sh@10 -- # set +x 00:11:55.954 06:42:09 -- nvmf/common.sh@469 -- # nvmfpid=65963 00:11:55.954 06:42:09 -- nvmf/common.sh@470 -- # waitforlisten 65963 00:11:55.954 06:42:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:55.954 06:42:09 -- common/autotest_common.sh@829 -- # '[' -z 65963 ']' 00:11:55.954 06:42:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.954 06:42:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:55.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
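
The nvmf_veth_init block above rebuilds the test network from scratch; the initial "Cannot find device" and "Cannot open network namespace" messages are expected, since the cleanup commands run before anything exists on a fresh VM. A minimal sketch of the topology those commands create, assembled from the trace (interface names, addresses and the 4420 port are taken verbatim from the log; ordering is condensed):

# Target side lives in its own network namespace; the initiator stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk

# One veth pair per interface: the *_if end carries traffic, the *_br end gets enslaved to a bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target-side ends into the namespace and address everything in 10.0.0.0/24.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring the links up, bridge the host-side ends together, and open TCP/4420 toward the initiator.
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks mirror the log: the root namespace pings 10.0.0.2/10.0.0.3, the target namespace pings 10.0.0.1 back.
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
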
00:11:55.954 06:42:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.954 06:42:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:55.954 06:42:09 -- common/autotest_common.sh@10 -- # set +x 00:11:55.954 [2024-12-14 06:42:09.873782] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:55.954 [2024-12-14 06:42:09.874279] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.213 [2024-12-14 06:42:10.014428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.213 [2024-12-14 06:42:10.164838] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:56.213 [2024-12-14 06:42:10.165323] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.213 [2024-12-14 06:42:10.165443] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.213 [2024-12-14 06:42:10.165634] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:56.213 [2024-12-14 06:42:10.165892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.213 [2024-12-14 06:42:10.166038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.213 [2024-12-14 06:42:10.166175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.213 [2024-12-14 06:42:10.166177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.148 06:42:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:57.148 06:42:10 -- common/autotest_common.sh@862 -- # return 0 00:11:57.148 06:42:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:57.148 06:42:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:57.148 06:42:10 -- common/autotest_common.sh@10 -- # set +x 00:11:57.148 06:42:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.148 06:42:10 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:57.148 06:42:10 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:57.149 06:42:10 -- target/multitarget.sh@21 -- # jq length 00:11:57.149 06:42:11 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:57.149 06:42:11 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:57.407 "nvmf_tgt_1" 00:11:57.407 06:42:11 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:57.407 "nvmf_tgt_2" 00:11:57.407 06:42:11 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:57.407 06:42:11 -- target/multitarget.sh@28 -- # jq length 00:11:57.666 06:42:11 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:57.666 06:42:11 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:57.666 true 00:11:57.666 06:42:11 -- target/multitarget.sh@33 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:57.924 true 00:11:57.924 06:42:11 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:57.924 06:42:11 -- target/multitarget.sh@35 -- # jq length 00:11:57.924 06:42:11 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:57.924 06:42:11 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:57.924 06:42:11 -- target/multitarget.sh@41 -- # nvmftestfini 00:11:57.924 06:42:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:57.924 06:42:11 -- nvmf/common.sh@116 -- # sync 00:11:58.183 06:42:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:58.183 06:42:11 -- nvmf/common.sh@119 -- # set +e 00:11:58.183 06:42:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:58.183 06:42:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:58.183 rmmod nvme_tcp 00:11:58.183 rmmod nvme_fabrics 00:11:58.183 rmmod nvme_keyring 00:11:58.183 06:42:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:58.183 06:42:12 -- nvmf/common.sh@123 -- # set -e 00:11:58.183 06:42:12 -- nvmf/common.sh@124 -- # return 0 00:11:58.183 06:42:12 -- nvmf/common.sh@477 -- # '[' -n 65963 ']' 00:11:58.183 06:42:12 -- nvmf/common.sh@478 -- # killprocess 65963 00:11:58.183 06:42:12 -- common/autotest_common.sh@936 -- # '[' -z 65963 ']' 00:11:58.183 06:42:12 -- common/autotest_common.sh@940 -- # kill -0 65963 00:11:58.183 06:42:12 -- common/autotest_common.sh@941 -- # uname 00:11:58.183 06:42:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:58.183 06:42:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65963 00:11:58.183 06:42:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:58.183 06:42:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:58.183 killing process with pid 65963 00:11:58.183 06:42:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65963' 00:11:58.183 06:42:12 -- common/autotest_common.sh@955 -- # kill 65963 00:11:58.183 06:42:12 -- common/autotest_common.sh@960 -- # wait 65963 00:11:58.442 06:42:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:58.442 06:42:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:58.442 06:42:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:58.442 06:42:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:58.442 06:42:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:58.442 06:42:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.442 06:42:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:58.442 06:42:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.702 06:42:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:58.702 ************************************ 00:11:58.702 END TEST nvmf_multitarget 00:11:58.702 ************************************ 00:11:58.702 00:11:58.702 real 0m3.246s 00:11:58.702 user 0m10.094s 00:11:58.702 sys 0m0.805s 00:11:58.702 06:42:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:58.702 06:42:12 -- common/autotest_common.sh@10 -- # set +x 00:11:58.702 06:42:12 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:58.702 06:42:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:58.702 06:42:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:58.702 
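
The nvmf_multitarget case that just finished reduces to a short RPC sequence against the running target. A hedged reconstruction from the trace (rpc_py is the multitarget_rpc.py helper path shown above; the expected outputs are noted in comments):

rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

# The default target created by nvmf_tgt is the only one present at the start.
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]

# Add two extra targets (32 subsystem slots each), bringing the count to three.
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32      # prints "nvmf_tgt_1"
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32      # prints "nvmf_tgt_2"
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]

# Delete them again and confirm only the default target remains.
$rpc_py nvmf_delete_target -n nvmf_tgt_1            # prints "true"
$rpc_py nvmf_delete_target -n nvmf_tgt_2            # prints "true"
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]
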
06:42:12 -- common/autotest_common.sh@10 -- # set +x 00:11:58.702 ************************************ 00:11:58.702 START TEST nvmf_rpc 00:11:58.702 ************************************ 00:11:58.702 06:42:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:58.702 * Looking for test storage... 00:11:58.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:58.702 06:42:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:58.702 06:42:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:58.702 06:42:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:58.702 06:42:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:58.702 06:42:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:58.702 06:42:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:58.702 06:42:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:58.702 06:42:12 -- scripts/common.sh@335 -- # IFS=.-: 00:11:58.702 06:42:12 -- scripts/common.sh@335 -- # read -ra ver1 00:11:58.702 06:42:12 -- scripts/common.sh@336 -- # IFS=.-: 00:11:58.702 06:42:12 -- scripts/common.sh@336 -- # read -ra ver2 00:11:58.702 06:42:12 -- scripts/common.sh@337 -- # local 'op=<' 00:11:58.702 06:42:12 -- scripts/common.sh@339 -- # ver1_l=2 00:11:58.702 06:42:12 -- scripts/common.sh@340 -- # ver2_l=1 00:11:58.702 06:42:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:58.702 06:42:12 -- scripts/common.sh@343 -- # case "$op" in 00:11:58.702 06:42:12 -- scripts/common.sh@344 -- # : 1 00:11:58.702 06:42:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:58.702 06:42:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:58.702 06:42:12 -- scripts/common.sh@364 -- # decimal 1 00:11:58.702 06:42:12 -- scripts/common.sh@352 -- # local d=1 00:11:58.702 06:42:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:58.702 06:42:12 -- scripts/common.sh@354 -- # echo 1 00:11:58.702 06:42:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:58.702 06:42:12 -- scripts/common.sh@365 -- # decimal 2 00:11:58.702 06:42:12 -- scripts/common.sh@352 -- # local d=2 00:11:58.702 06:42:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:58.702 06:42:12 -- scripts/common.sh@354 -- # echo 2 00:11:58.702 06:42:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:58.702 06:42:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:58.702 06:42:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:58.702 06:42:12 -- scripts/common.sh@367 -- # return 0 00:11:58.702 06:42:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:58.702 06:42:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:58.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.702 --rc genhtml_branch_coverage=1 00:11:58.702 --rc genhtml_function_coverage=1 00:11:58.702 --rc genhtml_legend=1 00:11:58.702 --rc geninfo_all_blocks=1 00:11:58.702 --rc geninfo_unexecuted_blocks=1 00:11:58.702 00:11:58.702 ' 00:11:58.702 06:42:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:58.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.702 --rc genhtml_branch_coverage=1 00:11:58.702 --rc genhtml_function_coverage=1 00:11:58.702 --rc genhtml_legend=1 00:11:58.702 --rc geninfo_all_blocks=1 00:11:58.702 --rc geninfo_unexecuted_blocks=1 00:11:58.702 00:11:58.702 ' 00:11:58.702 06:42:12 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:58.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.702 --rc genhtml_branch_coverage=1 00:11:58.702 --rc genhtml_function_coverage=1 00:11:58.702 --rc genhtml_legend=1 00:11:58.702 --rc geninfo_all_blocks=1 00:11:58.702 --rc geninfo_unexecuted_blocks=1 00:11:58.702 00:11:58.702 ' 00:11:58.702 06:42:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:58.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.702 --rc genhtml_branch_coverage=1 00:11:58.702 --rc genhtml_function_coverage=1 00:11:58.702 --rc genhtml_legend=1 00:11:58.702 --rc geninfo_all_blocks=1 00:11:58.702 --rc geninfo_unexecuted_blocks=1 00:11:58.702 00:11:58.702 ' 00:11:58.702 06:42:12 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:58.702 06:42:12 -- nvmf/common.sh@7 -- # uname -s 00:11:58.702 06:42:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.702 06:42:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.702 06:42:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.702 06:42:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.702 06:42:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.702 06:42:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.702 06:42:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.702 06:42:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.702 06:42:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.702 06:42:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.974 06:42:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:11:58.974 06:42:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:11:58.974 06:42:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.974 06:42:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.974 06:42:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:58.974 06:42:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:58.974 06:42:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.974 06:42:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.974 06:42:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.974 06:42:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.974 06:42:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.974 06:42:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.974 06:42:12 -- paths/export.sh@5 -- # export PATH 00:11:58.974 06:42:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.974 06:42:12 -- nvmf/common.sh@46 -- # : 0 00:11:58.974 06:42:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:58.974 06:42:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:58.974 06:42:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:58.974 06:42:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.974 06:42:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.974 06:42:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:58.974 06:42:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:58.974 06:42:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:58.974 06:42:12 -- target/rpc.sh@11 -- # loops=5 00:11:58.974 06:42:12 -- target/rpc.sh@23 -- # nvmftestinit 00:11:58.974 06:42:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:58.974 06:42:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.974 06:42:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:58.974 06:42:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:58.974 06:42:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:58.974 06:42:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.974 06:42:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:58.974 06:42:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.974 06:42:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:58.974 06:42:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:58.974 06:42:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:58.974 06:42:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:58.974 06:42:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:58.974 06:42:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:58.974 06:42:12 -- nvmf/common.sh@140 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:11:58.974 06:42:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.974 06:42:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:58.974 06:42:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:58.974 06:42:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:58.974 06:42:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:58.974 06:42:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:58.974 06:42:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.974 06:42:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:58.974 06:42:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:58.974 06:42:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:58.974 06:42:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:58.974 06:42:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:58.974 06:42:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:58.974 Cannot find device "nvmf_tgt_br" 00:11:58.974 06:42:12 -- nvmf/common.sh@154 -- # true 00:11:58.974 06:42:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:58.974 Cannot find device "nvmf_tgt_br2" 00:11:58.974 06:42:12 -- nvmf/common.sh@155 -- # true 00:11:58.974 06:42:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:58.974 06:42:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:58.974 Cannot find device "nvmf_tgt_br" 00:11:58.974 06:42:12 -- nvmf/common.sh@157 -- # true 00:11:58.974 06:42:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:58.974 Cannot find device "nvmf_tgt_br2" 00:11:58.974 06:42:12 -- nvmf/common.sh@158 -- # true 00:11:58.974 06:42:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:58.974 06:42:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:58.974 06:42:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:58.975 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:58.975 06:42:12 -- nvmf/common.sh@161 -- # true 00:11:58.975 06:42:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:58.975 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:58.975 06:42:12 -- nvmf/common.sh@162 -- # true 00:11:58.975 06:42:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:58.975 06:42:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:58.975 06:42:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:58.975 06:42:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:58.975 06:42:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:58.975 06:42:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:58.975 06:42:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:58.975 06:42:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:58.975 06:42:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:58.975 06:42:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:58.975 06:42:12 -- nvmf/common.sh@183 -- # ip 
link set nvmf_init_br up 00:11:58.975 06:42:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:58.975 06:42:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:58.975 06:42:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:58.975 06:42:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:58.975 06:42:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:58.975 06:42:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:58.975 06:42:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:59.247 06:42:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:59.247 06:42:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:59.247 06:42:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:59.247 06:42:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:59.247 06:42:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:59.247 06:42:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:59.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:11:59.247 00:11:59.247 --- 10.0.0.2 ping statistics --- 00:11:59.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.247 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:11:59.247 06:42:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:59.247 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:59.247 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:11:59.247 00:11:59.247 --- 10.0.0.3 ping statistics --- 00:11:59.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.247 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:11:59.247 06:42:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:59.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:59.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:11:59.247 00:11:59.247 --- 10.0.0.1 ping statistics --- 00:11:59.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.247 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:11:59.247 06:42:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.247 06:42:13 -- nvmf/common.sh@421 -- # return 0 00:11:59.247 06:42:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:59.247 06:42:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.247 06:42:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:59.247 06:42:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:59.247 06:42:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.247 06:42:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:59.247 06:42:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:59.247 06:42:13 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:59.247 06:42:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:59.247 06:42:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:59.247 06:42:13 -- common/autotest_common.sh@10 -- # set +x 00:11:59.247 06:42:13 -- nvmf/common.sh@469 -- # nvmfpid=66201 00:11:59.247 06:42:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.247 06:42:13 -- nvmf/common.sh@470 -- # waitforlisten 66201 00:11:59.247 06:42:13 -- common/autotest_common.sh@829 -- # '[' -z 66201 ']' 00:11:59.247 06:42:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.247 06:42:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:59.247 06:42:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.247 06:42:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:59.247 06:42:13 -- common/autotest_common.sh@10 -- # set +x 00:11:59.247 [2024-12-14 06:42:13.116993] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:59.247 [2024-12-14 06:42:13.117134] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.505 [2024-12-14 06:42:13.258746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.505 [2024-12-14 06:42:13.383539] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:59.506 [2024-12-14 06:42:13.383721] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.506 [2024-12-14 06:42:13.383733] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.506 [2024-12-14 06:42:13.383741] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
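
nvmfappstart here boils down to launching the target binary inside the test namespace and blocking until its RPC socket responds. A simplified sketch under the same paths and flags seen in the trace; the polling loop is an assumption standing in for the waitforlisten helper:

# Run the SPDK NVMe-oF target in the namespace: shm id 0, all tracepoint groups enabled, cores 0-3.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Stand-in for waitforlisten: poll until the RPC socket at /var/tmp/spdk.sock answers a request.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
    sleep 0.5
done
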
00:11:59.506 [2024-12-14 06:42:13.384428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.506 [2024-12-14 06:42:13.384637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.506 [2024-12-14 06:42:13.384753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.506 [2024-12-14 06:42:13.384739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.072 06:42:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:00.072 06:42:14 -- common/autotest_common.sh@862 -- # return 0 00:12:00.072 06:42:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:00.072 06:42:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:00.072 06:42:14 -- common/autotest_common.sh@10 -- # set +x 00:12:00.331 06:42:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.331 06:42:14 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:00.331 06:42:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.331 06:42:14 -- common/autotest_common.sh@10 -- # set +x 00:12:00.331 06:42:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.331 06:42:14 -- target/rpc.sh@26 -- # stats='{ 00:12:00.331 "poll_groups": [ 00:12:00.331 { 00:12:00.331 "admin_qpairs": 0, 00:12:00.331 "completed_nvme_io": 0, 00:12:00.331 "current_admin_qpairs": 0, 00:12:00.331 "current_io_qpairs": 0, 00:12:00.331 "io_qpairs": 0, 00:12:00.331 "name": "nvmf_tgt_poll_group_0", 00:12:00.331 "pending_bdev_io": 0, 00:12:00.331 "transports": [] 00:12:00.331 }, 00:12:00.331 { 00:12:00.331 "admin_qpairs": 0, 00:12:00.331 "completed_nvme_io": 0, 00:12:00.331 "current_admin_qpairs": 0, 00:12:00.331 "current_io_qpairs": 0, 00:12:00.331 "io_qpairs": 0, 00:12:00.331 "name": "nvmf_tgt_poll_group_1", 00:12:00.331 "pending_bdev_io": 0, 00:12:00.331 "transports": [] 00:12:00.331 }, 00:12:00.331 { 00:12:00.331 "admin_qpairs": 0, 00:12:00.331 "completed_nvme_io": 0, 00:12:00.331 "current_admin_qpairs": 0, 00:12:00.331 "current_io_qpairs": 0, 00:12:00.331 "io_qpairs": 0, 00:12:00.331 "name": "nvmf_tgt_poll_group_2", 00:12:00.331 "pending_bdev_io": 0, 00:12:00.331 "transports": [] 00:12:00.331 }, 00:12:00.331 { 00:12:00.331 "admin_qpairs": 0, 00:12:00.331 "completed_nvme_io": 0, 00:12:00.331 "current_admin_qpairs": 0, 00:12:00.331 "current_io_qpairs": 0, 00:12:00.331 "io_qpairs": 0, 00:12:00.331 "name": "nvmf_tgt_poll_group_3", 00:12:00.331 "pending_bdev_io": 0, 00:12:00.331 "transports": [] 00:12:00.331 } 00:12:00.331 ], 00:12:00.331 "tick_rate": 2200000000 00:12:00.331 }' 00:12:00.331 06:42:14 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:00.331 06:42:14 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:00.331 06:42:14 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:00.331 06:42:14 -- target/rpc.sh@15 -- # wc -l 00:12:00.331 06:42:14 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:00.331 06:42:14 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:00.331 06:42:14 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:00.331 06:42:14 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:00.331 06:42:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.331 06:42:14 -- common/autotest_common.sh@10 -- # set +x 00:12:00.331 [2024-12-14 06:42:14.199683] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.331 06:42:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.331 06:42:14 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:00.331 06:42:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.331 06:42:14 -- common/autotest_common.sh@10 -- # set +x 00:12:00.331 06:42:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.331 06:42:14 -- target/rpc.sh@33 -- # stats='{ 00:12:00.331 "poll_groups": [ 00:12:00.331 { 00:12:00.331 "admin_qpairs": 0, 00:12:00.331 "completed_nvme_io": 0, 00:12:00.331 "current_admin_qpairs": 0, 00:12:00.331 "current_io_qpairs": 0, 00:12:00.331 "io_qpairs": 0, 00:12:00.331 "name": "nvmf_tgt_poll_group_0", 00:12:00.331 "pending_bdev_io": 0, 00:12:00.331 "transports": [ 00:12:00.331 { 00:12:00.331 "trtype": "TCP" 00:12:00.331 } 00:12:00.331 ] 00:12:00.331 }, 00:12:00.331 { 00:12:00.331 "admin_qpairs": 0, 00:12:00.331 "completed_nvme_io": 0, 00:12:00.331 "current_admin_qpairs": 0, 00:12:00.332 "current_io_qpairs": 0, 00:12:00.332 "io_qpairs": 0, 00:12:00.332 "name": "nvmf_tgt_poll_group_1", 00:12:00.332 "pending_bdev_io": 0, 00:12:00.332 "transports": [ 00:12:00.332 { 00:12:00.332 "trtype": "TCP" 00:12:00.332 } 00:12:00.332 ] 00:12:00.332 }, 00:12:00.332 { 00:12:00.332 "admin_qpairs": 0, 00:12:00.332 "completed_nvme_io": 0, 00:12:00.332 "current_admin_qpairs": 0, 00:12:00.332 "current_io_qpairs": 0, 00:12:00.332 "io_qpairs": 0, 00:12:00.332 "name": "nvmf_tgt_poll_group_2", 00:12:00.332 "pending_bdev_io": 0, 00:12:00.332 "transports": [ 00:12:00.332 { 00:12:00.332 "trtype": "TCP" 00:12:00.332 } 00:12:00.332 ] 00:12:00.332 }, 00:12:00.332 { 00:12:00.332 "admin_qpairs": 0, 00:12:00.332 "completed_nvme_io": 0, 00:12:00.332 "current_admin_qpairs": 0, 00:12:00.332 "current_io_qpairs": 0, 00:12:00.332 "io_qpairs": 0, 00:12:00.332 "name": "nvmf_tgt_poll_group_3", 00:12:00.332 "pending_bdev_io": 0, 00:12:00.332 "transports": [ 00:12:00.332 { 00:12:00.332 "trtype": "TCP" 00:12:00.332 } 00:12:00.332 ] 00:12:00.332 } 00:12:00.332 ], 00:12:00.332 "tick_rate": 2200000000 00:12:00.332 }' 00:12:00.332 06:42:14 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:00.332 06:42:14 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:00.332 06:42:14 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:00.332 06:42:14 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:00.332 06:42:14 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:00.332 06:42:14 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:00.332 06:42:14 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:00.332 06:42:14 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:00.332 06:42:14 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:00.590 06:42:14 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:00.590 06:42:14 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:00.590 06:42:14 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:00.590 06:42:14 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:00.590 06:42:14 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:00.590 06:42:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.590 06:42:14 -- common/autotest_common.sh@10 -- # set +x 00:12:00.590 Malloc1 00:12:00.590 06:42:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.590 06:42:14 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:00.590 06:42:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.590 06:42:14 -- common/autotest_common.sh@10 -- # set +x 00:12:00.590 
06:42:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.590 06:42:14 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:00.590 06:42:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.590 06:42:14 -- common/autotest_common.sh@10 -- # set +x 00:12:00.590 06:42:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.590 06:42:14 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:00.590 06:42:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.590 06:42:14 -- common/autotest_common.sh@10 -- # set +x 00:12:00.590 06:42:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.591 06:42:14 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.591 06:42:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.591 06:42:14 -- common/autotest_common.sh@10 -- # set +x 00:12:00.591 [2024-12-14 06:42:14.417540] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.591 06:42:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.591 06:42:14 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 -a 10.0.0.2 -s 4420 00:12:00.591 06:42:14 -- common/autotest_common.sh@650 -- # local es=0 00:12:00.591 06:42:14 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 -a 10.0.0.2 -s 4420 00:12:00.591 06:42:14 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:00.591 06:42:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.591 06:42:14 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:00.591 06:42:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.591 06:42:14 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:00.591 06:42:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.591 06:42:14 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:00.591 06:42:14 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:00.591 06:42:14 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 -a 10.0.0.2 -s 4420 00:12:00.591 [2024-12-14 06:42:14.441874] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986' 00:12:00.591 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:00.591 could not add new controller: failed to write to nvme-fabrics device 00:12:00.591 06:42:14 -- common/autotest_common.sh@653 -- # es=1 00:12:00.591 06:42:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:00.591 06:42:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:00.591 06:42:14 -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:12:00.591 06:42:14 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:12:00.591 06:42:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.591 06:42:14 -- common/autotest_common.sh@10 -- # set +x 00:12:00.591 06:42:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.591 06:42:14 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:00.849 06:42:14 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:00.849 06:42:14 -- common/autotest_common.sh@1187 -- # local i=0 00:12:00.849 06:42:14 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.849 06:42:14 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:00.849 06:42:14 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:02.749 06:42:16 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:02.749 06:42:16 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:02.749 06:42:16 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.749 06:42:16 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:02.749 06:42:16 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.749 06:42:16 -- common/autotest_common.sh@1197 -- # return 0 00:12:02.749 06:42:16 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:03.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.007 06:42:16 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:03.007 06:42:16 -- common/autotest_common.sh@1208 -- # local i=0 00:12:03.007 06:42:16 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:03.007 06:42:16 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.007 06:42:16 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.007 06:42:16 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:03.007 06:42:16 -- common/autotest_common.sh@1220 -- # return 0 00:12:03.007 06:42:16 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:12:03.007 06:42:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.007 06:42:16 -- common/autotest_common.sh@10 -- # set +x 00:12:03.007 06:42:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.007 06:42:16 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.007 06:42:16 -- common/autotest_common.sh@650 -- # local es=0 00:12:03.007 06:42:16 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.007 06:42:16 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:03.007 06:42:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:03.007 06:42:16 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:03.007 06:42:16 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:03.007 06:42:16 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:03.007 06:42:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:03.007 06:42:16 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:03.007 06:42:16 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:03.008 06:42:16 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.008 [2024-12-14 06:42:16.852922] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986' 00:12:03.008 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:03.008 could not add new controller: failed to write to nvme-fabrics device 00:12:03.008 06:42:16 -- common/autotest_common.sh@653 -- # es=1 00:12:03.008 06:42:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:03.008 06:42:16 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:03.008 06:42:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:03.008 06:42:16 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:03.008 06:42:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.008 06:42:16 -- common/autotest_common.sh@10 -- # set +x 00:12:03.008 06:42:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.008 06:42:16 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.266 06:42:17 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:03.266 06:42:17 -- common/autotest_common.sh@1187 -- # local i=0 00:12:03.266 06:42:17 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.266 06:42:17 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:03.266 06:42:17 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:05.237 06:42:19 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:05.237 06:42:19 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:05.237 06:42:19 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.237 06:42:19 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:05.237 06:42:19 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.237 06:42:19 -- common/autotest_common.sh@1197 -- # return 0 00:12:05.237 06:42:19 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:05.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.237 06:42:19 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:05.237 06:42:19 -- common/autotest_common.sh@1208 -- # local i=0 00:12:05.237 06:42:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:05.237 06:42:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.237 06:42:19 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:05.237 06:42:19 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.237 06:42:19 -- common/autotest_common.sh@1220 -- # return 0 00:12:05.237 06:42:19 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.237 06:42:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.237 06:42:19 -- common/autotest_common.sh@10 -- # set +x 00:12:05.237 06:42:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.237 06:42:19 -- target/rpc.sh@81 -- # seq 1 5 00:12:05.237 06:42:19 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:05.237 06:42:19 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:05.237 06:42:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.237 06:42:19 -- common/autotest_common.sh@10 -- # set +x 00:12:05.237 06:42:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.237 06:42:19 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.237 06:42:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.237 06:42:19 -- common/autotest_common.sh@10 -- # set +x 00:12:05.237 [2024-12-14 06:42:19.156112] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.238 06:42:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.238 06:42:19 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:05.238 06:42:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.238 06:42:19 -- common/autotest_common.sh@10 -- # set +x 00:12:05.238 06:42:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.238 06:42:19 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:05.238 06:42:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.238 06:42:19 -- common/autotest_common.sh@10 -- # set +x 00:12:05.238 06:42:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.238 06:42:19 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:05.508 06:42:19 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:05.508 06:42:19 -- common/autotest_common.sh@1187 -- # local i=0 00:12:05.508 06:42:19 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:05.508 06:42:19 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:05.508 06:42:19 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:07.410 06:42:21 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:07.410 06:42:21 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:07.410 06:42:21 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:07.410 06:42:21 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:07.410 06:42:21 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:07.410 06:42:21 -- common/autotest_common.sh@1197 -- # return 0 00:12:07.410 06:42:21 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.668 06:42:21 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:07.668 06:42:21 -- common/autotest_common.sh@1208 -- # local i=0 00:12:07.668 06:42:21 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:07.668 06:42:21 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 
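
Each pass of the seq 1 5 loop in rpc.sh repeats the create/connect/tear-down cycle traced above. A condensed sketch of one iteration using the same NQN, listen address, namespace ID and serial from the log (rpc_cmd is the test helper that forwards to the target's RPC socket; the waitforserial and waitforserial_disconnect helpers are approximated here by simple polling loops):

# Publish a Malloc1-backed subsystem on the TCP listener inside the namespace.
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

# Kernel initiator in the root namespace connects across the bridge to the listener.
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
     -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

# waitforserial: block until a namespace with the subsystem serial shows up as a block device.
until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done

nvme disconnect -n nqn.2016-06.io.spdk:cnode1

# waitforserial_disconnect: block until that device is gone again before tearing down.
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done

rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
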
00:12:07.668 06:42:21 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.668 06:42:21 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:07.668 06:42:21 -- common/autotest_common.sh@1220 -- # return 0 00:12:07.668 06:42:21 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:07.668 06:42:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.668 06:42:21 -- common/autotest_common.sh@10 -- # set +x 00:12:07.668 06:42:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.668 06:42:21 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.668 06:42:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.668 06:42:21 -- common/autotest_common.sh@10 -- # set +x 00:12:07.668 06:42:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.668 06:42:21 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:07.668 06:42:21 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:07.668 06:42:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.668 06:42:21 -- common/autotest_common.sh@10 -- # set +x 00:12:07.668 06:42:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.668 06:42:21 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.668 06:42:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.668 06:42:21 -- common/autotest_common.sh@10 -- # set +x 00:12:07.668 [2024-12-14 06:42:21.467615] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.668 06:42:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.668 06:42:21 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:07.668 06:42:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.668 06:42:21 -- common/autotest_common.sh@10 -- # set +x 00:12:07.668 06:42:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.668 06:42:21 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:07.668 06:42:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.668 06:42:21 -- common/autotest_common.sh@10 -- # set +x 00:12:07.668 06:42:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.668 06:42:21 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:07.927 06:42:21 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:07.927 06:42:21 -- common/autotest_common.sh@1187 -- # local i=0 00:12:07.927 06:42:21 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.927 06:42:21 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:07.927 06:42:21 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:09.832 06:42:23 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:09.832 06:42:23 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:09.832 06:42:23 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.832 06:42:23 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:09.832 06:42:23 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.832 06:42:23 -- 
common/autotest_common.sh@1197 -- # return 0 00:12:09.832 06:42:23 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.832 06:42:23 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:09.832 06:42:23 -- common/autotest_common.sh@1208 -- # local i=0 00:12:09.832 06:42:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:09.832 06:42:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.832 06:42:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:09.832 06:42:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.832 06:42:23 -- common/autotest_common.sh@1220 -- # return 0 00:12:09.832 06:42:23 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:09.832 06:42:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.832 06:42:23 -- common/autotest_common.sh@10 -- # set +x 00:12:09.832 06:42:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.832 06:42:23 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.832 06:42:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.832 06:42:23 -- common/autotest_common.sh@10 -- # set +x 00:12:09.832 06:42:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.832 06:42:23 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:09.832 06:42:23 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:09.832 06:42:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.832 06:42:23 -- common/autotest_common.sh@10 -- # set +x 00:12:09.832 06:42:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.832 06:42:23 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.832 06:42:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.832 06:42:23 -- common/autotest_common.sh@10 -- # set +x 00:12:09.832 [2024-12-14 06:42:23.790889] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.832 06:42:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.832 06:42:23 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:09.832 06:42:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.832 06:42:23 -- common/autotest_common.sh@10 -- # set +x 00:12:09.832 06:42:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.832 06:42:23 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:09.832 06:42:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.832 06:42:23 -- common/autotest_common.sh@10 -- # set +x 00:12:09.832 06:42:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.833 06:42:23 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.091 06:42:23 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:10.091 06:42:23 -- common/autotest_common.sh@1187 -- # local i=0 00:12:10.091 06:42:23 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.091 06:42:23 -- common/autotest_common.sh@1189 -- 
# [[ -n '' ]] 00:12:10.091 06:42:23 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:12.619 06:42:25 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:12.619 06:42:25 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.619 06:42:25 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:12.619 06:42:26 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:12.619 06:42:26 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.619 06:42:26 -- common/autotest_common.sh@1197 -- # return 0 00:12:12.619 06:42:26 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:12.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.619 06:42:26 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:12.619 06:42:26 -- common/autotest_common.sh@1208 -- # local i=0 00:12:12.619 06:42:26 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:12.619 06:42:26 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.619 06:42:26 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:12.619 06:42:26 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.619 06:42:26 -- common/autotest_common.sh@1220 -- # return 0 00:12:12.619 06:42:26 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:12.619 06:42:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.619 06:42:26 -- common/autotest_common.sh@10 -- # set +x 00:12:12.619 06:42:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.619 06:42:26 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.619 06:42:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.619 06:42:26 -- common/autotest_common.sh@10 -- # set +x 00:12:12.619 06:42:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.619 06:42:26 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:12.619 06:42:26 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:12.619 06:42:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.619 06:42:26 -- common/autotest_common.sh@10 -- # set +x 00:12:12.619 06:42:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.619 06:42:26 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.619 06:42:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.619 06:42:26 -- common/autotest_common.sh@10 -- # set +x 00:12:12.619 [2024-12-14 06:42:26.210661] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.619 06:42:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.619 06:42:26 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:12.619 06:42:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.619 06:42:26 -- common/autotest_common.sh@10 -- # set +x 00:12:12.619 06:42:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.619 06:42:26 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:12.619 06:42:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.619 06:42:26 -- common/autotest_common.sh@10 -- # set +x 00:12:12.619 06:42:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.619 
06:42:26 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:12.619 06:42:26 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:12.619 06:42:26 -- common/autotest_common.sh@1187 -- # local i=0 00:12:12.619 06:42:26 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:12.619 06:42:26 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:12.619 06:42:26 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:14.522 06:42:28 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:14.522 06:42:28 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:14.522 06:42:28 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.522 06:42:28 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:14.522 06:42:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.522 06:42:28 -- common/autotest_common.sh@1197 -- # return 0 00:12:14.522 06:42:28 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:14.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.522 06:42:28 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:14.522 06:42:28 -- common/autotest_common.sh@1208 -- # local i=0 00:12:14.522 06:42:28 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:14.522 06:42:28 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.522 06:42:28 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:14.522 06:42:28 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.522 06:42:28 -- common/autotest_common.sh@1220 -- # return 0 00:12:14.522 06:42:28 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:14.522 06:42:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.522 06:42:28 -- common/autotest_common.sh@10 -- # set +x 00:12:14.522 06:42:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.522 06:42:28 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.522 06:42:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.522 06:42:28 -- common/autotest_common.sh@10 -- # set +x 00:12:14.780 06:42:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.780 06:42:28 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:14.780 06:42:28 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:14.780 06:42:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.780 06:42:28 -- common/autotest_common.sh@10 -- # set +x 00:12:14.780 06:42:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.780 06:42:28 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.780 06:42:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.780 06:42:28 -- common/autotest_common.sh@10 -- # set +x 00:12:14.780 [2024-12-14 06:42:28.529916] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.780 06:42:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.780 06:42:28 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:14.780 
06:42:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.780 06:42:28 -- common/autotest_common.sh@10 -- # set +x 00:12:14.780 06:42:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.780 06:42:28 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:14.780 06:42:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.780 06:42:28 -- common/autotest_common.sh@10 -- # set +x 00:12:14.780 06:42:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.780 06:42:28 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:14.780 06:42:28 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:14.780 06:42:28 -- common/autotest_common.sh@1187 -- # local i=0 00:12:14.780 06:42:28 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:14.780 06:42:28 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:14.780 06:42:28 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:17.311 06:42:30 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:17.311 06:42:30 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:17.311 06:42:30 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.311 06:42:30 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:17.311 06:42:30 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.311 06:42:30 -- common/autotest_common.sh@1197 -- # return 0 00:12:17.311 06:42:30 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:17.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.311 06:42:30 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:17.311 06:42:30 -- common/autotest_common.sh@1208 -- # local i=0 00:12:17.311 06:42:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:17.311 06:42:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.311 06:42:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:17.311 06:42:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.311 06:42:30 -- common/autotest_common.sh@1220 -- # return 0 00:12:17.311 06:42:30 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:17.311 06:42:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.311 06:42:30 -- common/autotest_common.sh@10 -- # set +x 00:12:17.311 06:42:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.311 06:42:30 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.311 06:42:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.311 06:42:30 -- common/autotest_common.sh@10 -- # set +x 00:12:17.311 06:42:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.311 06:42:30 -- target/rpc.sh@99 -- # seq 1 5 00:12:17.311 06:42:30 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:17.311 06:42:30 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:17.311 06:42:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.311 06:42:30 -- common/autotest_common.sh@10 -- # set +x 00:12:17.311 06:42:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.311 06:42:30 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.311 06:42:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.311 06:42:30 -- common/autotest_common.sh@10 -- # set +x 00:12:17.311 [2024-12-14 06:42:30.936880] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.311 06:42:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.311 06:42:30 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:17.311 06:42:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.311 06:42:30 -- common/autotest_common.sh@10 -- # set +x 00:12:17.311 06:42:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.311 06:42:30 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:17.311 06:42:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.311 06:42:30 -- common/autotest_common.sh@10 -- # set +x 00:12:17.311 06:42:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.311 06:42:30 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.311 06:42:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.311 06:42:30 -- common/autotest_common.sh@10 -- # set +x 00:12:17.311 06:42:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.311 06:42:30 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.311 06:42:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.311 06:42:30 -- common/autotest_common.sh@10 -- # set +x 00:12:17.311 06:42:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.311 06:42:30 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:17.311 06:42:30 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:17.311 06:42:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:30 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 06:42:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:30 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.312 06:42:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:30 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 [2024-12-14 06:42:30.984856] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.312 06:42:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:30 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:17.312 06:42:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:30 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 06:42:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:30 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:17.312 06:42:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:30 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:31 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.312 06:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:31 -- 
common/autotest_common.sh@10 -- # set +x 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:31 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.312 06:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:31 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:17.312 06:42:31 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:17.312 06:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:31 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.312 06:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 [2024-12-14 06:42:31.036915] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:31 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:17.312 06:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:31 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:17.312 06:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:31 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.312 06:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:31 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.312 06:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:31 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:17.312 06:42:31 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:17.312 06:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:31 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.312 06:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 [2024-12-14 06:42:31.084946] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 
06:42:31 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:17.312 06:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:31 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:17.312 06:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:31 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.312 06:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:31 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.312 06:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:31 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:17.312 06:42:31 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:17.312 06:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:31 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.312 06:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 [2024-12-14 06:42:31.145054] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:31 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:17.312 06:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:31 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:17.312 06:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:31 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.312 06:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:31 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.312 06:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:31 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
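The nvmf_get_stats dump that follows is what rpc.sh's jsum helper aggregates: it selects one numeric field from every poll group with jq and totals it with awk before the final '(( sum > 0 ))' assertions. A minimal standalone sketch of that aggregation, assuming the JSON response has been saved to stats.json (a hypothetical file name, not part of the test):

    # sum io_qpairs across all poll groups, as jsum '.poll_groups[].io_qpairs' does
    jq '.poll_groups[].io_qpairs' stats.json | awk '{s += $1} END {print s}'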
00:12:17.312 06:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.312 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 06:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.312 06:42:31 -- target/rpc.sh@110 -- # stats='{ 00:12:17.312 "poll_groups": [ 00:12:17.312 { 00:12:17.312 "admin_qpairs": 2, 00:12:17.312 "completed_nvme_io": 67, 00:12:17.312 "current_admin_qpairs": 0, 00:12:17.312 "current_io_qpairs": 0, 00:12:17.312 "io_qpairs": 16, 00:12:17.312 "name": "nvmf_tgt_poll_group_0", 00:12:17.312 "pending_bdev_io": 0, 00:12:17.312 "transports": [ 00:12:17.312 { 00:12:17.312 "trtype": "TCP" 00:12:17.312 } 00:12:17.312 ] 00:12:17.312 }, 00:12:17.312 { 00:12:17.312 "admin_qpairs": 3, 00:12:17.312 "completed_nvme_io": 67, 00:12:17.312 "current_admin_qpairs": 0, 00:12:17.312 "current_io_qpairs": 0, 00:12:17.312 "io_qpairs": 17, 00:12:17.312 "name": "nvmf_tgt_poll_group_1", 00:12:17.312 "pending_bdev_io": 0, 00:12:17.312 "transports": [ 00:12:17.312 { 00:12:17.312 "trtype": "TCP" 00:12:17.312 } 00:12:17.312 ] 00:12:17.312 }, 00:12:17.312 { 00:12:17.312 "admin_qpairs": 1, 00:12:17.312 "completed_nvme_io": 118, 00:12:17.312 "current_admin_qpairs": 0, 00:12:17.312 "current_io_qpairs": 0, 00:12:17.312 "io_qpairs": 19, 00:12:17.312 "name": "nvmf_tgt_poll_group_2", 00:12:17.312 "pending_bdev_io": 0, 00:12:17.312 "transports": [ 00:12:17.312 { 00:12:17.312 "trtype": "TCP" 00:12:17.312 } 00:12:17.312 ] 00:12:17.312 }, 00:12:17.312 { 00:12:17.312 "admin_qpairs": 1, 00:12:17.312 "completed_nvme_io": 168, 00:12:17.312 "current_admin_qpairs": 0, 00:12:17.312 "current_io_qpairs": 0, 00:12:17.312 "io_qpairs": 18, 00:12:17.312 "name": "nvmf_tgt_poll_group_3", 00:12:17.312 "pending_bdev_io": 0, 00:12:17.312 "transports": [ 00:12:17.312 { 00:12:17.312 "trtype": "TCP" 00:12:17.312 } 00:12:17.312 ] 00:12:17.312 } 00:12:17.313 ], 00:12:17.313 "tick_rate": 2200000000 00:12:17.313 }' 00:12:17.313 06:42:31 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:17.313 06:42:31 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:17.313 06:42:31 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:17.313 06:42:31 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:17.313 06:42:31 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:17.313 06:42:31 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:17.313 06:42:31 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:17.313 06:42:31 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:17.313 06:42:31 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:17.571 06:42:31 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:17.571 06:42:31 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:17.571 06:42:31 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:17.572 06:42:31 -- target/rpc.sh@123 -- # nvmftestfini 00:12:17.572 06:42:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:17.572 06:42:31 -- nvmf/common.sh@116 -- # sync 00:12:17.572 06:42:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:17.572 06:42:31 -- nvmf/common.sh@119 -- # set +e 00:12:17.572 06:42:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:17.572 06:42:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:17.572 rmmod nvme_tcp 00:12:17.572 rmmod nvme_fabrics 00:12:17.572 rmmod nvme_keyring 00:12:17.572 06:42:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:17.572 06:42:31 -- nvmf/common.sh@123 -- # set -e 00:12:17.572 06:42:31 -- nvmf/common.sh@124 
-- # return 0 00:12:17.572 06:42:31 -- nvmf/common.sh@477 -- # '[' -n 66201 ']' 00:12:17.572 06:42:31 -- nvmf/common.sh@478 -- # killprocess 66201 00:12:17.572 06:42:31 -- common/autotest_common.sh@936 -- # '[' -z 66201 ']' 00:12:17.572 06:42:31 -- common/autotest_common.sh@940 -- # kill -0 66201 00:12:17.572 06:42:31 -- common/autotest_common.sh@941 -- # uname 00:12:17.572 06:42:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:17.572 06:42:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66201 00:12:17.572 06:42:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:17.572 06:42:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:17.572 06:42:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66201' 00:12:17.572 killing process with pid 66201 00:12:17.572 06:42:31 -- common/autotest_common.sh@955 -- # kill 66201 00:12:17.572 06:42:31 -- common/autotest_common.sh@960 -- # wait 66201 00:12:17.830 06:42:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:17.830 06:42:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:17.830 06:42:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:17.830 06:42:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:17.830 06:42:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:17.830 06:42:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.830 06:42:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:17.830 06:42:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.089 06:42:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:18.089 00:12:18.089 real 0m19.343s 00:12:18.089 user 1m12.256s 00:12:18.089 sys 0m2.711s 00:12:18.089 06:42:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:18.089 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:18.089 ************************************ 00:12:18.089 END TEST nvmf_rpc 00:12:18.089 ************************************ 00:12:18.089 06:42:31 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:18.089 06:42:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:18.089 06:42:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:18.089 06:42:31 -- common/autotest_common.sh@10 -- # set +x 00:12:18.089 ************************************ 00:12:18.089 START TEST nvmf_invalid 00:12:18.089 ************************************ 00:12:18.089 06:42:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:18.089 * Looking for test storage... 
00:12:18.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:18.089 06:42:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:18.089 06:42:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:18.089 06:42:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:18.089 06:42:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:18.089 06:42:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:18.089 06:42:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:18.089 06:42:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:18.089 06:42:32 -- scripts/common.sh@335 -- # IFS=.-: 00:12:18.089 06:42:32 -- scripts/common.sh@335 -- # read -ra ver1 00:12:18.089 06:42:32 -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.089 06:42:32 -- scripts/common.sh@336 -- # read -ra ver2 00:12:18.089 06:42:32 -- scripts/common.sh@337 -- # local 'op=<' 00:12:18.089 06:42:32 -- scripts/common.sh@339 -- # ver1_l=2 00:12:18.089 06:42:32 -- scripts/common.sh@340 -- # ver2_l=1 00:12:18.089 06:42:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:18.089 06:42:32 -- scripts/common.sh@343 -- # case "$op" in 00:12:18.089 06:42:32 -- scripts/common.sh@344 -- # : 1 00:12:18.089 06:42:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:18.089 06:42:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:18.089 06:42:32 -- scripts/common.sh@364 -- # decimal 1 00:12:18.089 06:42:32 -- scripts/common.sh@352 -- # local d=1 00:12:18.089 06:42:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.089 06:42:32 -- scripts/common.sh@354 -- # echo 1 00:12:18.089 06:42:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:18.089 06:42:32 -- scripts/common.sh@365 -- # decimal 2 00:12:18.089 06:42:32 -- scripts/common.sh@352 -- # local d=2 00:12:18.089 06:42:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.089 06:42:32 -- scripts/common.sh@354 -- # echo 2 00:12:18.089 06:42:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:18.089 06:42:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:18.089 06:42:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:18.089 06:42:32 -- scripts/common.sh@367 -- # return 0 00:12:18.089 06:42:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.089 06:42:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:18.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.089 --rc genhtml_branch_coverage=1 00:12:18.089 --rc genhtml_function_coverage=1 00:12:18.089 --rc genhtml_legend=1 00:12:18.089 --rc geninfo_all_blocks=1 00:12:18.089 --rc geninfo_unexecuted_blocks=1 00:12:18.089 00:12:18.089 ' 00:12:18.089 06:42:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:18.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.089 --rc genhtml_branch_coverage=1 00:12:18.089 --rc genhtml_function_coverage=1 00:12:18.089 --rc genhtml_legend=1 00:12:18.089 --rc geninfo_all_blocks=1 00:12:18.089 --rc geninfo_unexecuted_blocks=1 00:12:18.089 00:12:18.089 ' 00:12:18.089 06:42:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:18.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.089 --rc genhtml_branch_coverage=1 00:12:18.089 --rc genhtml_function_coverage=1 00:12:18.089 --rc genhtml_legend=1 00:12:18.089 --rc geninfo_all_blocks=1 00:12:18.089 --rc geninfo_unexecuted_blocks=1 00:12:18.089 00:12:18.089 ' 00:12:18.089 
06:42:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:18.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.089 --rc genhtml_branch_coverage=1 00:12:18.089 --rc genhtml_function_coverage=1 00:12:18.089 --rc genhtml_legend=1 00:12:18.089 --rc geninfo_all_blocks=1 00:12:18.089 --rc geninfo_unexecuted_blocks=1 00:12:18.089 00:12:18.089 ' 00:12:18.089 06:42:32 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:18.089 06:42:32 -- nvmf/common.sh@7 -- # uname -s 00:12:18.089 06:42:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.089 06:42:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.089 06:42:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.089 06:42:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.089 06:42:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.089 06:42:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.089 06:42:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.089 06:42:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.089 06:42:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.089 06:42:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.089 06:42:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:12:18.089 06:42:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:12:18.089 06:42:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.089 06:42:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.089 06:42:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:18.089 06:42:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:18.089 06:42:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.089 06:42:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.089 06:42:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.089 06:42:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.089 06:42:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.089 06:42:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.089 06:42:32 -- paths/export.sh@5 -- # export PATH 00:12:18.090 06:42:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.090 06:42:32 -- nvmf/common.sh@46 -- # : 0 00:12:18.090 06:42:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:18.090 06:42:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:18.090 06:42:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:18.090 06:42:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.090 06:42:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.090 06:42:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:18.090 06:42:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:18.090 06:42:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:18.090 06:42:32 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:18.090 06:42:32 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:18.090 06:42:32 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:18.090 06:42:32 -- target/invalid.sh@14 -- # target=foobar 00:12:18.090 06:42:32 -- target/invalid.sh@16 -- # RANDOM=0 00:12:18.090 06:42:32 -- target/invalid.sh@34 -- # nvmftestinit 00:12:18.090 06:42:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:18.090 06:42:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.090 06:42:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:18.090 06:42:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:18.090 06:42:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:18.090 06:42:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.090 06:42:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.090 06:42:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.348 06:42:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:18.348 06:42:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:18.348 06:42:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:18.349 06:42:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:18.349 06:42:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:18.349 06:42:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:18.349 06:42:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.349 06:42:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.349 06:42:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:12:18.349 06:42:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:18.349 06:42:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:18.349 06:42:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:18.349 06:42:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:18.349 06:42:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.349 06:42:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:18.349 06:42:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:18.349 06:42:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:18.349 06:42:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:18.349 06:42:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:18.349 06:42:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:18.349 Cannot find device "nvmf_tgt_br" 00:12:18.349 06:42:32 -- nvmf/common.sh@154 -- # true 00:12:18.349 06:42:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:18.349 Cannot find device "nvmf_tgt_br2" 00:12:18.349 06:42:32 -- nvmf/common.sh@155 -- # true 00:12:18.349 06:42:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:18.349 06:42:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:18.349 Cannot find device "nvmf_tgt_br" 00:12:18.349 06:42:32 -- nvmf/common.sh@157 -- # true 00:12:18.349 06:42:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:18.349 Cannot find device "nvmf_tgt_br2" 00:12:18.349 06:42:32 -- nvmf/common.sh@158 -- # true 00:12:18.349 06:42:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:18.349 06:42:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:18.349 06:42:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:18.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:18.349 06:42:32 -- nvmf/common.sh@161 -- # true 00:12:18.349 06:42:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:18.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:18.349 06:42:32 -- nvmf/common.sh@162 -- # true 00:12:18.349 06:42:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:18.349 06:42:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:18.349 06:42:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:18.349 06:42:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:18.349 06:42:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:18.349 06:42:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:18.349 06:42:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:18.349 06:42:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:18.349 06:42:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:18.349 06:42:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:18.349 06:42:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:18.349 06:42:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:18.349 06:42:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
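At this point in nvmf_veth_init the initiator interface (nvmf_init_if, 10.0.0.1/24) sits in the root namespace, while both target interfaces (nvmf_tgt_if at 10.0.0.2/24 and nvmf_tgt_if2 at 10.0.0.3/24) live inside the nvmf_tgt_ns_spdk namespace; the bridge enslaving and the port-4420 iptables rule follow next. A quick way to inspect the resulting layout with plain iproute2 (not part of the traced script):

    ip netns list                              # should list nvmf_tgt_ns_spdk
    ip -4 addr show dev nvmf_init_if           # 10.0.0.1/24 in the root namespace
    ip netns exec nvmf_tgt_ns_spdk ip -4 addr  # 10.0.0.2/24 and 10.0.0.3/24 on the target veths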
00:12:18.349 06:42:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:18.607 06:42:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:18.607 06:42:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:18.607 06:42:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:18.607 06:42:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:18.607 06:42:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:18.607 06:42:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:18.607 06:42:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:18.607 06:42:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:18.607 06:42:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:18.607 06:42:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:18.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:18.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:12:18.608 00:12:18.608 --- 10.0.0.2 ping statistics --- 00:12:18.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.608 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:18.608 06:42:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:18.608 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:18.608 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:12:18.608 00:12:18.608 --- 10.0.0.3 ping statistics --- 00:12:18.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.608 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:12:18.608 06:42:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:18.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:18.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:12:18.608 00:12:18.608 --- 10.0.0.1 ping statistics --- 00:12:18.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.608 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:12:18.608 06:42:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.608 06:42:32 -- nvmf/common.sh@421 -- # return 0 00:12:18.608 06:42:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:18.608 06:42:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.608 06:42:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:18.608 06:42:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:18.608 06:42:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.608 06:42:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:18.608 06:42:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:18.608 06:42:32 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:18.608 06:42:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:18.608 06:42:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:18.608 06:42:32 -- common/autotest_common.sh@10 -- # set +x 00:12:18.608 06:42:32 -- nvmf/common.sh@469 -- # nvmfpid=66725 00:12:18.608 06:42:32 -- nvmf/common.sh@470 -- # waitforlisten 66725 00:12:18.608 06:42:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.608 06:42:32 -- common/autotest_common.sh@829 -- # '[' -z 66725 ']' 00:12:18.608 06:42:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.608 06:42:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:18.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.608 06:42:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.608 06:42:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:18.608 06:42:32 -- common/autotest_common.sh@10 -- # set +x 00:12:18.608 [2024-12-14 06:42:32.500083] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:18.608 [2024-12-14 06:42:32.500174] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.866 [2024-12-14 06:42:32.635599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.866 [2024-12-14 06:42:32.746849] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:18.866 [2024-12-14 06:42:32.747340] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.866 [2024-12-14 06:42:32.747468] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.866 [2024-12-14 06:42:32.747578] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
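The nvmf_tgt command line launched above maps directly onto the surrounding startup notices: -m 0xF is the core mask (hence the four reactor threads reported next), -e 0xFFFF is the tracepoint group mask echoed by app_setup_trace, and -i 0 is the shared-memory instance ID referenced by 'spdk_trace -s nvmf -i 0'. A condensed sketch of that invocation, with SPDK_BIN_DIR standing in for the build output directory (placeholder, not a variable the test defines):

    # -i 0      shared-memory instance ID (matches 'spdk_trace -s nvmf -i 0')
    # -e 0xFFFF tracepoint group mask ('Tracepoint Group Mask 0xFFFF specified')
    # -m 0xF    core mask for four reactors
    ip netns exec nvmf_tgt_ns_spdk "$SPDK_BIN_DIR/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF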
00:12:18.866 [2024-12-14 06:42:32.747769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.866 [2024-12-14 06:42:32.747894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.866 [2024-12-14 06:42:32.748006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.866 [2024-12-14 06:42:32.748007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.799 06:42:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:19.799 06:42:33 -- common/autotest_common.sh@862 -- # return 0 00:12:19.799 06:42:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:19.799 06:42:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:19.799 06:42:33 -- common/autotest_common.sh@10 -- # set +x 00:12:19.799 06:42:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.799 06:42:33 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:19.799 06:42:33 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20866 00:12:20.057 [2024-12-14 06:42:33.791315] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:20.057 06:42:33 -- target/invalid.sh@40 -- # out='2024/12/14 06:42:33 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode20866 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:20.057 request: 00:12:20.057 { 00:12:20.057 "method": "nvmf_create_subsystem", 00:12:20.057 "params": { 00:12:20.057 "nqn": "nqn.2016-06.io.spdk:cnode20866", 00:12:20.057 "tgt_name": "foobar" 00:12:20.057 } 00:12:20.057 } 00:12:20.057 Got JSON-RPC error response 00:12:20.057 GoRPCClient: error on JSON-RPC call' 00:12:20.057 06:42:33 -- target/invalid.sh@41 -- # [[ 2024/12/14 06:42:33 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode20866 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:20.057 request: 00:12:20.057 { 00:12:20.057 "method": "nvmf_create_subsystem", 00:12:20.057 "params": { 00:12:20.057 "nqn": "nqn.2016-06.io.spdk:cnode20866", 00:12:20.057 "tgt_name": "foobar" 00:12:20.057 } 00:12:20.057 } 00:12:20.057 Got JSON-RPC error response 00:12:20.057 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:20.057 06:42:33 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:20.057 06:42:33 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13687 00:12:20.315 [2024-12-14 06:42:34.083565] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13687: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:20.315 06:42:34 -- target/invalid.sh@45 -- # out='2024/12/14 06:42:34 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13687 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:20.315 request: 00:12:20.315 { 00:12:20.315 "method": "nvmf_create_subsystem", 00:12:20.315 "params": { 00:12:20.315 "nqn": "nqn.2016-06.io.spdk:cnode13687", 00:12:20.315 
"serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:20.315 } 00:12:20.315 } 00:12:20.315 Got JSON-RPC error response 00:12:20.315 GoRPCClient: error on JSON-RPC call' 00:12:20.315 06:42:34 -- target/invalid.sh@46 -- # [[ 2024/12/14 06:42:34 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13687 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:20.315 request: 00:12:20.315 { 00:12:20.315 "method": "nvmf_create_subsystem", 00:12:20.315 "params": { 00:12:20.315 "nqn": "nqn.2016-06.io.spdk:cnode13687", 00:12:20.315 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:20.315 } 00:12:20.315 } 00:12:20.315 Got JSON-RPC error response 00:12:20.315 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:20.315 06:42:34 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:20.315 06:42:34 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3212 00:12:20.574 [2024-12-14 06:42:34.379790] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3212: invalid model number 'SPDK_Controller' 00:12:20.574 06:42:34 -- target/invalid.sh@50 -- # out='2024/12/14 06:42:34 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode3212], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:20.574 request: 00:12:20.574 { 00:12:20.574 "method": "nvmf_create_subsystem", 00:12:20.574 "params": { 00:12:20.574 "nqn": "nqn.2016-06.io.spdk:cnode3212", 00:12:20.574 "model_number": "SPDK_Controller\u001f" 00:12:20.574 } 00:12:20.574 } 00:12:20.574 Got JSON-RPC error response 00:12:20.574 GoRPCClient: error on JSON-RPC call' 00:12:20.574 06:42:34 -- target/invalid.sh@51 -- # [[ 2024/12/14 06:42:34 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode3212], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:20.574 request: 00:12:20.574 { 00:12:20.574 "method": "nvmf_create_subsystem", 00:12:20.574 "params": { 00:12:20.574 "nqn": "nqn.2016-06.io.spdk:cnode3212", 00:12:20.574 "model_number": "SPDK_Controller\u001f" 00:12:20.574 } 00:12:20.574 } 00:12:20.574 Got JSON-RPC error response 00:12:20.574 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:20.574 06:42:34 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:20.574 06:42:34 -- target/invalid.sh@19 -- # local length=21 ll 00:12:20.575 06:42:34 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:20.575 06:42:34 -- target/invalid.sh@21 -- # local chars 00:12:20.575 06:42:34 -- target/invalid.sh@22 -- # local string 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length 
)) 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # printf %x 74 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # string+=J 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # printf %x 116 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # string+=t 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # printf %x 84 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # string+=T 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # printf %x 107 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # string+=k 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # printf %x 41 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # string+=')' 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # printf %x 78 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # string+=N 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # printf %x 51 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # string+=3 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # printf %x 82 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # string+=R 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # printf %x 41 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # string+=')' 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # printf %x 121 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # string+=y 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # printf %x 42 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # string+='*' 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # printf %x 93 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # string+=']' 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # printf %x 33 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # string+='!' 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # printf %x 54 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # string+=6 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # printf %x 114 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # string+=r 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # printf %x 69 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # string+=E 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # printf %x 122 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # string+=z 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # printf %x 90 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # string+=Z 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # printf %x 99 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # string+=c 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # printf %x 44 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # string+=, 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # printf %x 66 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:20.575 06:42:34 -- target/invalid.sh@25 -- # string+=B 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.575 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.575 06:42:34 -- target/invalid.sh@28 -- # [[ J == \- ]] 00:12:20.575 06:42:34 -- target/invalid.sh@31 -- # echo 'JtTk)N3R)y*]!6rEzZc,B' 00:12:20.575 06:42:34 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'JtTk)N3R)y*]!6rEzZc,B' nqn.2016-06.io.spdk:cnode15963 
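The loop traced above is target/invalid.sh's gen_random_s helper: it walks a chars array of ASCII codes 32-127, converts each randomly picked code to a \xNN escape with printf, appends the decoded character to string until the requested length (21 here) is reached, and the result is then passed to nvmf_create_subsystem as an over-long serial number (its error response follows below). A standalone approximation of the helper, assuming nothing beyond plain bash (function name kept, internals simplified):

gen_random_s() {
    local length=$1 ll string=''
    for (( ll = 0; ll < length; ll++ )); do
        # pick an ASCII code from the same 32..127 window used by invalid.sh
        local code=$(( 32 + RANDOM % 96 ))
        # render the code as a \xNN escape, then decode it to one character
        string+=$(printf "$(printf '\\x%x' "$code")")
    done
    printf '%s\n' "$string"
}
gen_random_s 21    # e.g. JtTk)N3R)y*]!6rEzZc,B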
00:12:20.834 [2024-12-14 06:42:34.756142] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15963: invalid serial number 'JtTk)N3R)y*]!6rEzZc,B' 00:12:20.834 06:42:34 -- target/invalid.sh@54 -- # out='2024/12/14 06:42:34 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15963 serial_number:JtTk)N3R)y*]!6rEzZc,B], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN JtTk)N3R)y*]!6rEzZc,B 00:12:20.834 request: 00:12:20.834 { 00:12:20.834 "method": "nvmf_create_subsystem", 00:12:20.834 "params": { 00:12:20.834 "nqn": "nqn.2016-06.io.spdk:cnode15963", 00:12:20.834 "serial_number": "JtTk)N3R)y*]!6rEzZc,B" 00:12:20.834 } 00:12:20.834 } 00:12:20.834 Got JSON-RPC error response 00:12:20.834 GoRPCClient: error on JSON-RPC call' 00:12:20.834 06:42:34 -- target/invalid.sh@55 -- # [[ 2024/12/14 06:42:34 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15963 serial_number:JtTk)N3R)y*]!6rEzZc,B], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN JtTk)N3R)y*]!6rEzZc,B 00:12:20.834 request: 00:12:20.834 { 00:12:20.834 "method": "nvmf_create_subsystem", 00:12:20.834 "params": { 00:12:20.834 "nqn": "nqn.2016-06.io.spdk:cnode15963", 00:12:20.834 "serial_number": "JtTk)N3R)y*]!6rEzZc,B" 00:12:20.834 } 00:12:20.834 } 00:12:20.834 Got JSON-RPC error response 00:12:20.834 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:20.834 06:42:34 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:20.834 06:42:34 -- target/invalid.sh@19 -- # local length=41 ll 00:12:20.834 06:42:34 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:20.834 06:42:34 -- target/invalid.sh@21 -- # local chars 00:12:20.834 06:42:34 -- target/invalid.sh@22 -- # local string 00:12:20.834 06:42:34 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:20.834 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.834 06:42:34 -- target/invalid.sh@25 -- # printf %x 39 00:12:20.834 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:20.834 06:42:34 -- target/invalid.sh@25 -- # string+=\' 00:12:20.834 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.834 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.834 06:42:34 -- target/invalid.sh@25 -- # printf %x 44 00:12:20.834 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:20.834 06:42:34 -- target/invalid.sh@25 -- # string+=, 00:12:20.834 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.835 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.835 06:42:34 -- target/invalid.sh@25 -- # printf %x 40 00:12:20.835 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:20.835 06:42:34 -- target/invalid.sh@25 -- # string+='(' 00:12:20.835 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.835 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.835 06:42:34 -- target/invalid.sh@25 -- # printf %x 45 00:12:20.835 06:42:34 
-- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:20.835 06:42:34 -- target/invalid.sh@25 -- # string+=- 00:12:20.835 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.835 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.835 06:42:34 -- target/invalid.sh@25 -- # printf %x 116 00:12:20.835 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:20.835 06:42:34 -- target/invalid.sh@25 -- # string+=t 00:12:20.835 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.835 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.835 06:42:34 -- target/invalid.sh@25 -- # printf %x 89 00:12:20.835 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:20.835 06:42:34 -- target/invalid.sh@25 -- # string+=Y 00:12:20.835 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.835 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.835 06:42:34 -- target/invalid.sh@25 -- # printf %x 80 00:12:20.835 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:20.835 06:42:34 -- target/invalid.sh@25 -- # string+=P 00:12:20.835 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.835 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.835 06:42:34 -- target/invalid.sh@25 -- # printf %x 35 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+='#' 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # printf %x 48 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+=0 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # printf %x 84 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+=T 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # printf %x 90 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+=Z 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # printf %x 92 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+='\' 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # printf %x 58 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+=: 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # printf %x 116 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+=t 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # printf %x 110 00:12:21.094 06:42:34 -- 
target/invalid.sh@25 -- # echo -e '\x6e' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+=n 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # printf %x 57 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+=9 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # printf %x 93 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+=']' 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # printf %x 117 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+=u 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # printf %x 48 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+=0 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # printf %x 43 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+=+ 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # printf %x 95 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+=_ 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # printf %x 76 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+=L 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # printf %x 100 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+=d 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # printf %x 91 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+='[' 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # printf %x 77 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+=M 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # printf %x 74 00:12:21.094 06:42:34 -- 
target/invalid.sh@25 -- # echo -e '\x4a' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+=J 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # printf %x 63 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+='?' 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # printf %x 105 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+=i 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # printf %x 97 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:21.094 06:42:34 -- target/invalid.sh@25 -- # string+=a 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.094 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # printf %x 107 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # string+=k 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # printf %x 61 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # string+== 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # printf %x 119 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # string+=w 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # printf %x 63 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # string+='?' 
00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # printf %x 49 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # string+=1 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # printf %x 60 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # string+='<' 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # printf %x 78 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # string+=N 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # printf %x 73 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # string+=I 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # printf %x 75 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # string+=K 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # printf %x 79 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # string+=O 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # printf %x 81 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # string+=Q 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # printf %x 108 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:21.095 06:42:34 -- target/invalid.sh@25 -- # string+=l 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.095 06:42:34 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.095 06:42:34 -- target/invalid.sh@28 -- # [[ ' == \- ]] 00:12:21.095 06:42:34 -- target/invalid.sh@31 -- # echo ''\'',(-tYP#0TZ\:tn9]u0+_Ld[MJ?iak=w?1 /dev/null' 00:12:24.250 06:42:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.250 06:42:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:24.250 00:12:24.250 real 0m6.070s 00:12:24.250 user 0m23.953s 00:12:24.250 sys 0m1.275s 00:12:24.250 06:42:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:24.250 06:42:37 -- common/autotest_common.sh@10 -- # set +x 00:12:24.250 ************************************ 00:12:24.250 END TEST nvmf_invalid 00:12:24.250 ************************************ 00:12:24.250 06:42:38 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 
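The nvmf_invalid test that ends above is purely negative-path: it confirms that nvmf_create_subsystem returns JSON-RPC error -32603 ("Unable to find target") for an unknown target name, and -32602 ("Invalid SN" / "Invalid MN") when the serial number or model number contains a non-printable byte (\x1f) or, as with the 21-character random string, exceeds the 20-byte serial-number field. The same checks can be repeated by hand against a running target; the sketch below assumes a target already listening on the default /var/tmp/spdk.sock RPC socket and uses illustrative cnode names:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# unknown target name -> expect Code=-32603 "Unable to find target"
$rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1 2>&1 \
    | grep -q 'Unable to find target' && echo 'bad target rejected'

# serial number with a control character -> expect Code=-32602 "Invalid SN"
$rpc nvmf_create_subsystem -s $'BADSERIAL\037' nqn.2016-06.io.spdk:cnode2 2>&1 \
    | grep -q 'Invalid SN' && echo 'bad serial rejected'

# model number with a control character -> expect Code=-32602 "Invalid MN"
$rpc nvmf_create_subsystem -d $'BADMODEL\037' nqn.2016-06.io.spdk:cnode3 2>&1 \
    | grep -q 'Invalid MN' && echo 'bad model rejected'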
00:12:24.250 06:42:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:24.250 06:42:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:24.250 06:42:38 -- common/autotest_common.sh@10 -- # set +x 00:12:24.250 ************************************ 00:12:24.250 START TEST nvmf_abort 00:12:24.250 ************************************ 00:12:24.250 06:42:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:24.250 * Looking for test storage... 00:12:24.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:24.250 06:42:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:24.250 06:42:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:24.250 06:42:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:24.250 06:42:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:24.250 06:42:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:24.250 06:42:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:24.250 06:42:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:24.250 06:42:38 -- scripts/common.sh@335 -- # IFS=.-: 00:12:24.250 06:42:38 -- scripts/common.sh@335 -- # read -ra ver1 00:12:24.250 06:42:38 -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.250 06:42:38 -- scripts/common.sh@336 -- # read -ra ver2 00:12:24.250 06:42:38 -- scripts/common.sh@337 -- # local 'op=<' 00:12:24.250 06:42:38 -- scripts/common.sh@339 -- # ver1_l=2 00:12:24.250 06:42:38 -- scripts/common.sh@340 -- # ver2_l=1 00:12:24.250 06:42:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:24.250 06:42:38 -- scripts/common.sh@343 -- # case "$op" in 00:12:24.250 06:42:38 -- scripts/common.sh@344 -- # : 1 00:12:24.250 06:42:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:24.250 06:42:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:24.250 06:42:38 -- scripts/common.sh@364 -- # decimal 1 00:12:24.250 06:42:38 -- scripts/common.sh@352 -- # local d=1 00:12:24.250 06:42:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.250 06:42:38 -- scripts/common.sh@354 -- # echo 1 00:12:24.250 06:42:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:24.250 06:42:38 -- scripts/common.sh@365 -- # decimal 2 00:12:24.250 06:42:38 -- scripts/common.sh@352 -- # local d=2 00:12:24.250 06:42:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.250 06:42:38 -- scripts/common.sh@354 -- # echo 2 00:12:24.250 06:42:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:24.250 06:42:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:24.250 06:42:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:24.250 06:42:38 -- scripts/common.sh@367 -- # return 0 00:12:24.250 06:42:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.250 06:42:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:24.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.250 --rc genhtml_branch_coverage=1 00:12:24.250 --rc genhtml_function_coverage=1 00:12:24.250 --rc genhtml_legend=1 00:12:24.250 --rc geninfo_all_blocks=1 00:12:24.250 --rc geninfo_unexecuted_blocks=1 00:12:24.250 00:12:24.250 ' 00:12:24.250 06:42:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:24.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.250 --rc genhtml_branch_coverage=1 00:12:24.250 --rc genhtml_function_coverage=1 00:12:24.250 --rc genhtml_legend=1 00:12:24.250 --rc geninfo_all_blocks=1 00:12:24.250 --rc geninfo_unexecuted_blocks=1 00:12:24.250 00:12:24.251 ' 00:12:24.251 06:42:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:24.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.251 --rc genhtml_branch_coverage=1 00:12:24.251 --rc genhtml_function_coverage=1 00:12:24.251 --rc genhtml_legend=1 00:12:24.251 --rc geninfo_all_blocks=1 00:12:24.251 --rc geninfo_unexecuted_blocks=1 00:12:24.251 00:12:24.251 ' 00:12:24.251 06:42:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:24.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.251 --rc genhtml_branch_coverage=1 00:12:24.251 --rc genhtml_function_coverage=1 00:12:24.251 --rc genhtml_legend=1 00:12:24.251 --rc geninfo_all_blocks=1 00:12:24.251 --rc geninfo_unexecuted_blocks=1 00:12:24.251 00:12:24.251 ' 00:12:24.251 06:42:38 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:24.251 06:42:38 -- nvmf/common.sh@7 -- # uname -s 00:12:24.251 06:42:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.251 06:42:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.251 06:42:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.251 06:42:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.251 06:42:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.251 06:42:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.251 06:42:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.251 06:42:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.251 06:42:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.251 06:42:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.251 06:42:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:12:24.251 
06:42:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:12:24.251 06:42:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.251 06:42:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.251 06:42:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:24.251 06:42:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:24.251 06:42:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.251 06:42:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.251 06:42:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.251 06:42:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.251 06:42:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.251 06:42:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.251 06:42:38 -- paths/export.sh@5 -- # export PATH 00:12:24.251 06:42:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.251 06:42:38 -- nvmf/common.sh@46 -- # : 0 00:12:24.251 06:42:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:24.251 06:42:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:24.251 06:42:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:24.251 06:42:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.251 06:42:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.251 06:42:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:12:24.251 06:42:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:24.251 06:42:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:24.251 06:42:38 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:24.251 06:42:38 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:24.251 06:42:38 -- target/abort.sh@14 -- # nvmftestinit 00:12:24.251 06:42:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:24.251 06:42:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.251 06:42:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:24.251 06:42:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:24.251 06:42:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:24.251 06:42:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.251 06:42:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.251 06:42:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.251 06:42:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:24.251 06:42:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:24.251 06:42:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:24.251 06:42:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:24.251 06:42:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:24.251 06:42:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:24.251 06:42:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:24.251 06:42:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.251 06:42:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:24.251 06:42:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:24.251 06:42:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:24.251 06:42:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:24.251 06:42:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:24.251 06:42:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.251 06:42:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:24.251 06:42:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:24.251 06:42:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:24.251 06:42:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:24.251 06:42:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:24.509 06:42:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:24.509 Cannot find device "nvmf_tgt_br" 00:12:24.509 06:42:38 -- nvmf/common.sh@154 -- # true 00:12:24.509 06:42:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:24.509 Cannot find device "nvmf_tgt_br2" 00:12:24.509 06:42:38 -- nvmf/common.sh@155 -- # true 00:12:24.509 06:42:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:24.509 06:42:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:24.509 Cannot find device "nvmf_tgt_br" 00:12:24.509 06:42:38 -- nvmf/common.sh@157 -- # true 00:12:24.509 06:42:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:24.509 Cannot find device "nvmf_tgt_br2" 00:12:24.509 06:42:38 -- nvmf/common.sh@158 -- # true 00:12:24.509 06:42:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:24.509 06:42:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:24.509 06:42:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:24.509 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:24.509 06:42:38 -- nvmf/common.sh@161 -- # true 00:12:24.509 06:42:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:24.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:24.509 06:42:38 -- nvmf/common.sh@162 -- # true 00:12:24.509 06:42:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:24.509 06:42:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:24.509 06:42:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:24.509 06:42:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:24.509 06:42:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:24.509 06:42:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:24.509 06:42:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:24.509 06:42:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:24.509 06:42:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:24.509 06:42:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:24.509 06:42:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:24.509 06:42:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:24.509 06:42:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:24.509 06:42:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:24.509 06:42:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:24.509 06:42:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:24.509 06:42:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:24.509 06:42:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:24.509 06:42:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:24.509 06:42:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:24.509 06:42:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:24.767 06:42:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:24.767 06:42:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:24.767 06:42:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:24.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:24.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:12:24.767 00:12:24.767 --- 10.0.0.2 ping statistics --- 00:12:24.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.767 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:12:24.767 06:42:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:24.767 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:24.767 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:12:24.767 00:12:24.767 --- 10.0.0.3 ping statistics --- 00:12:24.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.767 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:12:24.767 06:42:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:24.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:24.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:24.767 00:12:24.767 --- 10.0.0.1 ping statistics --- 00:12:24.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.767 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:24.767 06:42:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.767 06:42:38 -- nvmf/common.sh@421 -- # return 0 00:12:24.767 06:42:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:24.767 06:42:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.767 06:42:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:24.767 06:42:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:24.767 06:42:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.767 06:42:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:24.767 06:42:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:24.767 06:42:38 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:24.767 06:42:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:24.767 06:42:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:24.767 06:42:38 -- common/autotest_common.sh@10 -- # set +x 00:12:24.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.767 06:42:38 -- nvmf/common.sh@469 -- # nvmfpid=67236 00:12:24.767 06:42:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:24.768 06:42:38 -- nvmf/common.sh@470 -- # waitforlisten 67236 00:12:24.768 06:42:38 -- common/autotest_common.sh@829 -- # '[' -z 67236 ']' 00:12:24.768 06:42:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.768 06:42:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:24.768 06:42:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.768 06:42:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:24.768 06:42:38 -- common/autotest_common.sh@10 -- # set +x 00:12:24.768 [2024-12-14 06:42:38.619407] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:24.768 [2024-12-14 06:42:38.619527] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.025 [2024-12-14 06:42:38.760883] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:25.025 [2024-12-14 06:42:38.932082] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:25.025 [2024-12-14 06:42:38.932294] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.025 [2024-12-14 06:42:38.932312] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.025 [2024-12-14 06:42:38.932324] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
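nvmfappstart -m 0xE, traced just above, launches the target inside the test namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE), records nvmfpid, and waits in waitforlisten until the app answers on /var/tmp/spdk.sock. The -e 0xFFFF flag is the tracepoint group mask that app_setup_trace reports, and the 0xE core mask is why the reactor notices that follow name cores 1-3 only. A quick, illustrative way to read such a mask by hand (not part of the harness):

mask=0xE
for (( core = 0; core < 8; core++ )); do
    # each set bit in the mask pins one reactor to that core
    (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
done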
00:12:25.025 [2024-12-14 06:42:38.932453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.025 [2024-12-14 06:42:38.933059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.025 [2024-12-14 06:42:38.933072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.958 06:42:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:25.958 06:42:39 -- common/autotest_common.sh@862 -- # return 0 00:12:25.958 06:42:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:25.958 06:42:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:25.958 06:42:39 -- common/autotest_common.sh@10 -- # set +x 00:12:25.958 06:42:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.958 06:42:39 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:25.958 06:42:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.958 06:42:39 -- common/autotest_common.sh@10 -- # set +x 00:12:25.958 [2024-12-14 06:42:39.731046] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:25.958 06:42:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.958 06:42:39 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:25.958 06:42:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.958 06:42:39 -- common/autotest_common.sh@10 -- # set +x 00:12:25.958 Malloc0 00:12:25.958 06:42:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.958 06:42:39 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:25.958 06:42:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.958 06:42:39 -- common/autotest_common.sh@10 -- # set +x 00:12:25.958 Delay0 00:12:25.958 06:42:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.958 06:42:39 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:25.958 06:42:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.958 06:42:39 -- common/autotest_common.sh@10 -- # set +x 00:12:25.958 06:42:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.958 06:42:39 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:25.958 06:42:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.958 06:42:39 -- common/autotest_common.sh@10 -- # set +x 00:12:25.958 06:42:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.958 06:42:39 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:25.958 06:42:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.958 06:42:39 -- common/autotest_common.sh@10 -- # set +x 00:12:25.958 [2024-12-14 06:42:39.811836] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.958 06:42:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.958 06:42:39 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:25.958 06:42:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.958 06:42:39 -- common/autotest_common.sh@10 -- # set +x 00:12:25.958 06:42:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.958 06:42:39 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:26.216 [2024-12-14 06:42:39.982258] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:28.113 Initializing NVMe Controllers 00:12:28.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:28.113 controller IO queue size 128 less than required 00:12:28.113 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:28.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:28.113 Initialization complete. Launching workers. 00:12:28.113 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 30982 00:12:28.113 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31047, failed to submit 62 00:12:28.113 success 30982, unsuccess 65, failed 0 00:12:28.113 06:42:42 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:28.113 06:42:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.113 06:42:42 -- common/autotest_common.sh@10 -- # set +x 00:12:28.113 06:42:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.113 06:42:42 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:28.113 06:42:42 -- target/abort.sh@38 -- # nvmftestfini 00:12:28.113 06:42:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:28.113 06:42:42 -- nvmf/common.sh@116 -- # sync 00:12:28.113 06:42:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:28.113 06:42:42 -- nvmf/common.sh@119 -- # set +e 00:12:28.113 06:42:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:28.113 06:42:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:28.113 rmmod nvme_tcp 00:12:28.113 rmmod nvme_fabrics 00:12:28.113 rmmod nvme_keyring 00:12:28.371 06:42:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:28.371 06:42:42 -- nvmf/common.sh@123 -- # set -e 00:12:28.371 06:42:42 -- nvmf/common.sh@124 -- # return 0 00:12:28.371 06:42:42 -- nvmf/common.sh@477 -- # '[' -n 67236 ']' 00:12:28.371 06:42:42 -- nvmf/common.sh@478 -- # killprocess 67236 00:12:28.371 06:42:42 -- common/autotest_common.sh@936 -- # '[' -z 67236 ']' 00:12:28.371 06:42:42 -- common/autotest_common.sh@940 -- # kill -0 67236 00:12:28.371 06:42:42 -- common/autotest_common.sh@941 -- # uname 00:12:28.371 06:42:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:28.371 06:42:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67236 00:12:28.371 killing process with pid 67236 00:12:28.371 06:42:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:28.371 06:42:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:28.371 06:42:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67236' 00:12:28.371 06:42:42 -- common/autotest_common.sh@955 -- # kill 67236 00:12:28.371 06:42:42 -- common/autotest_common.sh@960 -- # wait 67236 00:12:28.938 06:42:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:28.938 06:42:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:28.938 06:42:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:28.938 06:42:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:28.938 06:42:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:28.938 06:42:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.938 
06:42:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:28.938 06:42:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.938 06:42:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:28.938 ************************************ 00:12:28.938 END TEST nvmf_abort 00:12:28.938 00:12:28.938 real 0m4.644s 00:12:28.938 user 0m12.937s 00:12:28.938 sys 0m1.102s 00:12:28.938 06:42:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:28.938 06:42:42 -- common/autotest_common.sh@10 -- # set +x 00:12:28.938 ************************************ 00:12:28.938 06:42:42 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:28.938 06:42:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:28.938 06:42:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:28.938 06:42:42 -- common/autotest_common.sh@10 -- # set +x 00:12:28.938 ************************************ 00:12:28.938 START TEST nvmf_ns_hotplug_stress 00:12:28.938 ************************************ 00:12:28.938 06:42:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:28.938 * Looking for test storage... 00:12:28.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:28.938 06:42:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:28.938 06:42:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:28.938 06:42:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:28.938 06:42:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:28.938 06:42:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:28.938 06:42:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:28.938 06:42:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:28.938 06:42:42 -- scripts/common.sh@335 -- # IFS=.-: 00:12:28.938 06:42:42 -- scripts/common.sh@335 -- # read -ra ver1 00:12:28.938 06:42:42 -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.938 06:42:42 -- scripts/common.sh@336 -- # read -ra ver2 00:12:28.938 06:42:42 -- scripts/common.sh@337 -- # local 'op=<' 00:12:28.938 06:42:42 -- scripts/common.sh@339 -- # ver1_l=2 00:12:28.938 06:42:42 -- scripts/common.sh@340 -- # ver2_l=1 00:12:28.938 06:42:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:28.938 06:42:42 -- scripts/common.sh@343 -- # case "$op" in 00:12:28.938 06:42:42 -- scripts/common.sh@344 -- # : 1 00:12:28.938 06:42:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:28.938 06:42:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:28.938 06:42:42 -- scripts/common.sh@364 -- # decimal 1 00:12:28.938 06:42:42 -- scripts/common.sh@352 -- # local d=1 00:12:28.939 06:42:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.939 06:42:42 -- scripts/common.sh@354 -- # echo 1 00:12:28.939 06:42:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:28.939 06:42:42 -- scripts/common.sh@365 -- # decimal 2 00:12:28.939 06:42:42 -- scripts/common.sh@352 -- # local d=2 00:12:28.939 06:42:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.939 06:42:42 -- scripts/common.sh@354 -- # echo 2 00:12:28.939 06:42:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:28.939 06:42:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:28.939 06:42:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:28.939 06:42:42 -- scripts/common.sh@367 -- # return 0 00:12:28.939 06:42:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.939 06:42:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:28.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.939 --rc genhtml_branch_coverage=1 00:12:28.939 --rc genhtml_function_coverage=1 00:12:28.939 --rc genhtml_legend=1 00:12:28.939 --rc geninfo_all_blocks=1 00:12:28.939 --rc geninfo_unexecuted_blocks=1 00:12:28.939 00:12:28.939 ' 00:12:28.939 06:42:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:28.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.939 --rc genhtml_branch_coverage=1 00:12:28.939 --rc genhtml_function_coverage=1 00:12:28.939 --rc genhtml_legend=1 00:12:28.939 --rc geninfo_all_blocks=1 00:12:28.939 --rc geninfo_unexecuted_blocks=1 00:12:28.939 00:12:28.939 ' 00:12:28.939 06:42:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:28.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.939 --rc genhtml_branch_coverage=1 00:12:28.939 --rc genhtml_function_coverage=1 00:12:28.939 --rc genhtml_legend=1 00:12:28.939 --rc geninfo_all_blocks=1 00:12:28.939 --rc geninfo_unexecuted_blocks=1 00:12:28.939 00:12:28.939 ' 00:12:28.939 06:42:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:28.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.939 --rc genhtml_branch_coverage=1 00:12:28.939 --rc genhtml_function_coverage=1 00:12:28.939 --rc genhtml_legend=1 00:12:28.939 --rc geninfo_all_blocks=1 00:12:28.939 --rc geninfo_unexecuted_blocks=1 00:12:28.939 00:12:28.939 ' 00:12:28.939 06:42:42 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:28.939 06:42:42 -- nvmf/common.sh@7 -- # uname -s 00:12:28.939 06:42:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.939 06:42:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.939 06:42:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.939 06:42:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.939 06:42:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.939 06:42:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.939 06:42:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.939 06:42:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.939 06:42:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.939 06:42:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.939 06:42:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 
00:12:28.939 06:42:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:12:28.939 06:42:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.939 06:42:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.939 06:42:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:28.939 06:42:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:28.939 06:42:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.939 06:42:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.939 06:42:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.939 06:42:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.939 06:42:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.939 06:42:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.939 06:42:42 -- paths/export.sh@5 -- # export PATH 00:12:28.939 06:42:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.939 06:42:42 -- nvmf/common.sh@46 -- # : 0 00:12:28.939 06:42:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:28.939 06:42:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:28.939 06:42:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:28.939 06:42:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.939 06:42:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.939 06:42:42 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:28.939 06:42:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:28.939 06:42:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:28.939 06:42:42 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:28.939 06:42:42 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:28.939 06:42:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:28.939 06:42:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.939 06:42:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:28.939 06:42:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:28.939 06:42:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:28.939 06:42:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.939 06:42:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:28.939 06:42:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.939 06:42:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:28.939 06:42:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:28.939 06:42:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:28.939 06:42:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:28.939 06:42:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:28.939 06:42:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:28.939 06:42:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.939 06:42:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.939 06:42:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:28.939 06:42:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:28.939 06:42:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:28.939 06:42:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:28.939 06:42:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:28.939 06:42:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.939 06:42:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:28.939 06:42:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:28.939 06:42:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:28.939 06:42:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:28.939 06:42:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:28.939 06:42:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:28.939 Cannot find device "nvmf_tgt_br" 00:12:28.939 06:42:42 -- nvmf/common.sh@154 -- # true 00:12:28.939 06:42:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:29.198 Cannot find device "nvmf_tgt_br2" 00:12:29.198 06:42:42 -- nvmf/common.sh@155 -- # true 00:12:29.198 06:42:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:29.198 06:42:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:29.198 Cannot find device "nvmf_tgt_br" 00:12:29.198 06:42:42 -- nvmf/common.sh@157 -- # true 00:12:29.198 06:42:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:29.198 Cannot find device "nvmf_tgt_br2" 00:12:29.198 06:42:42 -- nvmf/common.sh@158 -- # true 00:12:29.198 06:42:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:29.198 06:42:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:29.198 06:42:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:29.199 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:29.199 06:42:43 -- nvmf/common.sh@161 -- # true 00:12:29.199 06:42:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:29.199 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:29.199 06:42:43 -- nvmf/common.sh@162 -- # true 00:12:29.199 06:42:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:29.199 06:42:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:29.199 06:42:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:29.199 06:42:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:29.199 06:42:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:29.199 06:42:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:29.199 06:42:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:29.199 06:42:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:29.199 06:42:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:29.199 06:42:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:29.199 06:42:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:29.199 06:42:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:29.199 06:42:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:29.199 06:42:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:29.199 06:42:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:29.199 06:42:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:29.199 06:42:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:29.199 06:42:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:29.199 06:42:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:29.199 06:42:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:29.199 06:42:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:29.199 06:42:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:29.199 06:42:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:29.199 06:42:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:29.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:29.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:12:29.199 00:12:29.199 --- 10.0.0.2 ping statistics --- 00:12:29.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.199 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:12:29.199 06:42:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:29.199 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:29.199 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:12:29.199 00:12:29.199 --- 10.0.0.3 ping statistics --- 00:12:29.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.199 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:12:29.199 06:42:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:29.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:29.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:12:29.199 00:12:29.199 --- 10.0.0.1 ping statistics --- 00:12:29.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.199 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:12:29.199 06:42:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.199 06:42:43 -- nvmf/common.sh@421 -- # return 0 00:12:29.199 06:42:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:29.199 06:42:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.199 06:42:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:29.199 06:42:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:29.199 06:42:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.199 06:42:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:29.199 06:42:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:29.457 06:42:43 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:29.457 06:42:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:29.457 06:42:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:29.457 06:42:43 -- common/autotest_common.sh@10 -- # set +x 00:12:29.457 06:42:43 -- nvmf/common.sh@469 -- # nvmfpid=67518 00:12:29.457 06:42:43 -- nvmf/common.sh@470 -- # waitforlisten 67518 00:12:29.457 06:42:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:29.457 06:42:43 -- common/autotest_common.sh@829 -- # '[' -z 67518 ']' 00:12:29.457 06:42:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.457 06:42:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:29.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.457 06:42:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.457 06:42:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:29.457 06:42:43 -- common/autotest_common.sh@10 -- # set +x 00:12:29.457 [2024-12-14 06:42:43.246374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:29.457 [2024-12-14 06:42:43.246454] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.457 [2024-12-14 06:42:43.379190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:29.715 [2024-12-14 06:42:43.478816] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:29.715 [2024-12-14 06:42:43.479263] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.715 [2024-12-14 06:42:43.479431] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.716 [2024-12-14 06:42:43.479655] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
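Everything from nvmf_veth_init above amounts to building a disposable virtual topology before the target starts: a host-side veth nvmf_init_if at 10.0.0.1/24, two target-side veths (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) moved into the nvmf_tgt_ns_spdk namespace, and a bridge nvmf_br joining the peer ends. The three pings then prove reachability in both directions, nvme-tcp is loaded, and nvmf_tgt is launched inside the namespace with core mask 0xE. The commands below are lifted from that trace and condensed into one sketch; it omits the stale-device teardown and the waitforlisten handshake, so it illustrates the topology rather than replacing nvmf/common.sh.

  # Condensed from the nvmf_veth_init trace above; not a drop-in replacement
  # for nvmf/common.sh (cleanup of pre-existing devices is skipped).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br  up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp

  # The target itself then runs inside the namespace (backgrounded; the
  # harness waits for its RPC socket before issuing rpc.py calls):
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &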
00:12:29.716 [2024-12-14 06:42:43.479911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.716 [2024-12-14 06:42:43.479995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.716 [2024-12-14 06:42:43.479994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.293 06:42:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:30.293 06:42:44 -- common/autotest_common.sh@862 -- # return 0 00:12:30.293 06:42:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:30.293 06:42:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:30.293 06:42:44 -- common/autotest_common.sh@10 -- # set +x 00:12:30.293 06:42:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.293 06:42:44 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:30.293 06:42:44 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:30.567 [2024-12-14 06:42:44.475048] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.567 06:42:44 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:30.827 06:42:44 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.085 [2024-12-14 06:42:45.019748] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.085 06:42:45 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:31.342 06:42:45 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:31.600 Malloc0 00:12:31.600 06:42:45 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:31.859 Delay0 00:12:31.859 06:42:45 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:32.116 06:42:45 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:32.373 NULL1 00:12:32.373 06:42:46 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:32.631 06:42:46 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=67653 00:12:32.631 06:42:46 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:32.631 06:42:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:32.631 06:42:46 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.004 Read completed with error (sct=0, sc=11) 00:12:34.004 06:42:47 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:34.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:34.004 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:12:34.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:34.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:34.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:34.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:34.262 06:42:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:34.262 06:42:48 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:34.521 true 00:12:34.521 06:42:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:34.521 06:42:48 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:35.455 06:42:49 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:35.455 06:42:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:35.455 06:42:49 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:35.713 true 00:12:35.713 06:42:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:35.713 06:42:49 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:35.971 06:42:49 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:36.228 06:42:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:36.228 06:42:50 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:36.486 true 00:12:36.486 06:42:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:36.486 06:42:50 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.422 06:42:51 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.422 06:42:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:37.422 06:42:51 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:37.681 true 00:12:37.681 06:42:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:37.681 06:42:51 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.940 06:42:51 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:38.198 06:42:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:38.198 06:42:52 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:38.456 true 00:12:38.456 06:42:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:38.456 06:42:52 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.391 06:42:53 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.391 06:42:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 
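From here the trace settles into the main hot-plug loop of ns_hotplug_stress.sh: spdk_nvme_perf (PID 67653 in this run) drives 30 seconds of 512-byte random reads at queue depth 128 from the host side, while the target side removes namespace 1, re-attaches Delay0, and grows NULL1 by one step (null_size 1001, 1002, ...) on every pass, for as long as kill -0 reports the perf process alive. The sketch below reconstructs that loop shape from the trace; the exact control flow lives in test/nvmf/target/ns_hotplug_stress.sh, so treat this as a reading aid rather than the script itself.

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

  # Host-side I/O load, exactly as launched in the trace above.
  $perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  null_size=1000
  # Reconstructed loop shape: hot-remove/re-add namespace 1 and resize NULL1
  # upward while perf is still running; the loop ends with the "No such
  # process" message from kill -0 once perf finishes its 30-second run.
  while kill -0 "$PERF_PID"; do
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      $rpc_py bdev_null_resize NULL1 "$null_size"
  done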
00:12:39.391 06:42:53 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:39.649 true 00:12:39.649 06:42:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:39.649 06:42:53 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.919 06:42:53 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.508 06:42:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:40.508 06:42:54 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:40.508 true 00:12:40.508 06:42:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:40.508 06:42:54 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.766 06:42:54 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.024 06:42:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:41.024 06:42:54 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:41.282 true 00:12:41.282 06:42:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:41.282 06:42:55 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:42.217 06:42:56 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.476 06:42:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:42.476 06:42:56 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:42.735 true 00:12:42.735 06:42:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:42.735 06:42:56 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.993 06:42:56 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.251 06:42:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:43.251 06:42:57 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:43.509 true 00:12:43.509 06:42:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:43.509 06:42:57 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.768 06:42:57 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.027 06:42:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:44.027 06:42:57 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:44.285 true 00:12:44.285 06:42:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:44.285 06:42:58 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:12:45.222 06:42:59 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.481 06:42:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:45.481 06:42:59 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:45.739 true 00:12:45.739 06:42:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:45.739 06:42:59 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.307 06:42:59 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.307 06:43:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:46.307 06:43:00 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:46.566 true 00:12:46.566 06:43:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:46.566 06:43:00 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.824 06:43:00 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.081 06:43:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:47.082 06:43:01 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:47.646 true 00:12:47.646 06:43:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:47.646 06:43:01 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.211 06:43:02 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.776 06:43:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:48.776 06:43:02 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:49.034 true 00:12:49.034 06:43:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:49.034 06:43:02 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.407 06:43:04 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.666 06:43:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:50.666 06:43:04 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:50.666 true 00:12:50.666 06:43:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:50.666 06:43:04 -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.600 06:43:05 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:51.858 06:43:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:51.858 06:43:05 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:52.117 true 00:12:52.117 06:43:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:52.117 06:43:05 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.375 06:43:06 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.634 06:43:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:52.634 06:43:06 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:52.634 true 00:12:52.634 06:43:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:52.634 06:43:06 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.599 06:43:07 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.858 06:43:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:53.858 06:43:07 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:54.116 true 00:12:54.116 06:43:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:54.116 06:43:07 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.374 06:43:08 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.633 06:43:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:54.633 06:43:08 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:54.633 true 00:12:54.891 06:43:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:54.891 06:43:08 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.825 06:43:09 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.825 06:43:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:55.825 06:43:09 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:56.083 true 00:12:56.083 06:43:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:56.083 06:43:09 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.342 06:43:10 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.599 06:43:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:56.599 06:43:10 -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:56.857 true 00:12:56.857 06:43:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:56.857 06:43:10 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.792 06:43:11 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.792 06:43:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:57.792 06:43:11 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:58.050 true 00:12:58.050 06:43:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:58.050 06:43:11 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.309 06:43:12 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.567 06:43:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:58.567 06:43:12 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:58.826 true 00:12:58.826 06:43:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:58.826 06:43:12 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.085 06:43:13 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.384 06:43:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:59.384 06:43:13 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:59.952 true 00:12:59.952 06:43:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:12:59.952 06:43:13 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.519 06:43:14 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.778 06:43:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:00.778 06:43:14 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:01.036 true 00:13:01.036 06:43:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:13:01.036 06:43:14 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.296 06:43:15 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.863 06:43:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:01.863 06:43:15 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:01.863 true 00:13:01.863 06:43:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:13:01.863 06:43:15 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.121 06:43:16 -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.380 06:43:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:02.380 06:43:16 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:02.639 true 00:13:02.639 06:43:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:13:02.639 06:43:16 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.574 Initializing NVMe Controllers 00:13:03.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:03.574 Controller IO queue size 128, less than required. 00:13:03.574 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:03.574 Controller IO queue size 128, less than required. 00:13:03.574 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:03.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:03.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:03.574 Initialization complete. Launching workers. 00:13:03.574 ======================================================== 00:13:03.574 Latency(us) 00:13:03.574 Device Information : IOPS MiB/s Average min max 00:13:03.574 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 544.07 0.27 119256.10 2707.68 1100847.23 00:13:03.574 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11720.80 5.72 10919.57 2746.08 618210.53 00:13:03.574 ======================================================== 00:13:03.574 Total : 12264.87 5.99 15725.35 2707.68 1100847.23 00:13:03.574 00:13:03.574 06:43:17 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.834 06:43:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:03.834 06:43:17 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:04.093 true 00:13:04.093 06:43:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67653 00:13:04.093 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (67653) - No such process 00:13:04.093 06:43:17 -- target/ns_hotplug_stress.sh@53 -- # wait 67653 00:13:04.093 06:43:17 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.417 06:43:18 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:04.675 06:43:18 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:04.675 06:43:18 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:04.675 06:43:18 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:04.675 06:43:18 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:04.675 06:43:18 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:04.675 null0 00:13:04.934 06:43:18 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:04.934 06:43:18 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:04.934 06:43:18 -- target/ns_hotplug_stress.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:04.934 null1 00:13:04.934 06:43:18 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:04.934 06:43:18 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:04.934 06:43:18 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:05.192 null2 00:13:05.192 06:43:19 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:05.192 06:43:19 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:05.192 06:43:19 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:05.451 null3 00:13:05.451 06:43:19 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:05.451 06:43:19 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:05.451 06:43:19 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:05.710 null4 00:13:05.710 06:43:19 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:05.710 06:43:19 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:05.710 06:43:19 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:05.968 null5 00:13:05.968 06:43:19 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:05.968 06:43:19 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:05.968 06:43:19 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:06.227 null6 00:13:06.227 06:43:20 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:06.227 06:43:20 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:06.227 06:43:20 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:06.486 null7 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
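The test then widens the churn: after the perf-driven loop ends (the "No such process" from kill -0 above), both namespaces are dropped and eight null bdevs (null0 .. null7) are created, with one background worker per bdev repeatedly adding and removing its own namespace ID against cnode1; the workers' PIDs are collected via pids+=($!) and reaped by the wait visible further down. Below is a sketch of that add_remove pattern, using the names and the "(( i < 10 ))" bound shown in the trace; again a reconstruction for readability, not the script itself.

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Sketch of the parallel add/remove churn driven by ns_hotplug_stress.sh;
  # the iteration count (10) matches the "(( i < 10 ))" guards in the trace.
  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }

  pids=()
  for n in {0..7}; do
      $rpc_py bdev_null_create "null$n" 100 4096
      add_remove "$((n + 1))" "null$n" &
      pids+=($!)
  done
  wait "${pids[@]}"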
00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@66 -- # wait 68687 68689 68690 68692 68693 68696 68699 68700 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.486 06:43:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:06.745 06:43:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:06.745 06:43:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:06.745 06:43:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:06.745 06:43:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:06.745 06:43:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:06.745 06:43:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:06.745 06:43:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:06.745 06:43:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.005 06:43:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:07.265 06:43:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:07.265 06:43:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:07.265 06:43:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:07.265 06:43:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.265 06:43:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:07.265 06:43:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.265 06:43:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:07.523 06:43:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:07.523 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.523 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.523 06:43:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:07.523 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.523 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.523 06:43:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:07.523 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.523 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.523 06:43:21 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:07.523 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.524 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.524 06:43:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:07.524 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.524 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.524 06:43:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:07.524 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.524 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.524 06:43:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:07.524 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.524 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.524 06:43:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:07.782 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.782 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.782 06:43:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:07.782 06:43:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:07.782 06:43:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:07.782 06:43:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.782 06:43:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:07.782 06:43:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:08.041 06:43:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:08.041 06:43:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:08.041 06:43:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.041 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.041 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.041 06:43:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:08.041 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.041 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.041 06:43:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
00:13:08.041 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.041 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.041 06:43:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:08.041 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.041 06:43:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.041 06:43:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:08.041 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.041 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.041 06:43:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:08.300 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.300 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.300 06:43:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:08.300 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.300 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.300 06:43:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:08.300 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.300 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.300 06:43:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:08.300 06:43:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:08.300 06:43:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:08.300 06:43:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.300 06:43:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.300 06:43:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:08.559 06:43:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:08.559 06:43:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:08.559 06:43:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:08.560 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.560 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.560 06:43:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:08.560 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.560 06:43:22 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.560 06:43:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:08.560 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.560 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.560 06:43:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:08.560 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.560 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.560 06:43:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:08.560 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.560 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.560 06:43:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:08.819 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.819 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.819 06:43:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:08.819 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.819 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.819 06:43:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:08.819 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.819 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.819 06:43:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:08.819 06:43:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.819 06:43:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.819 06:43:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:08.819 06:43:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:08.819 06:43:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:09.078 06:43:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:09.078 06:43:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:09.078 06:43:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:09.078 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.078 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.078 06:43:22 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:09.078 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.078 06:43:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.078 06:43:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:09.078 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.078 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.078 06:43:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:09.078 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.078 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.078 06:43:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:09.337 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.337 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.337 06:43:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:09.337 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.337 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.337 06:43:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:09.337 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.337 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.337 06:43:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:09.337 06:43:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.337 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.337 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.337 06:43:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:09.337 06:43:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:09.337 06:43:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:09.337 06:43:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:09.596 06:43:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:09.596 06:43:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:09.596 06:43:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:09.596 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.596 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.596 06:43:23 
-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:09.596 06:43:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:09.596 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.596 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.596 06:43:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:09.596 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.596 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.596 06:43:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:09.596 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.596 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.596 06:43:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:09.854 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.854 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.854 06:43:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:09.854 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.854 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.854 06:43:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:09.854 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.854 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.854 06:43:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:09.854 06:43:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.854 06:43:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:09.854 06:43:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:09.854 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.854 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.854 06:43:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:10.113 06:43:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:10.113 06:43:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:10.113 06:43:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:10.113 06:43:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:10.113 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.113 06:43:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.113 06:43:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:10.113 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.113 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.113 06:43:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:10.113 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.113 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.113 06:43:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:10.113 06:43:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:10.372 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.372 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.372 06:43:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:10.372 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.372 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.372 06:43:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:10.372 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.372 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.372 06:43:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:10.372 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.372 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.372 06:43:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:10.372 06:43:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.372 06:43:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:10.372 06:43:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.372 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.372 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.372 06:43:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:10.630 06:43:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:10.630 06:43:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:10.630 06:43:24 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:10.630 06:43:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:10.630 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.630 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.630 06:43:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:10.630 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.630 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.630 06:43:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:10.889 06:43:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:10.889 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.889 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.889 06:43:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:10.889 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.889 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.889 06:43:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:10.889 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.889 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.889 06:43:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:10.889 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.889 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.889 06:43:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:10.889 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.890 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.890 06:43:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:10.890 06:43:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:10.890 06:43:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.148 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.148 06:43:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.148 06:43:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:11.148 06:43:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.148 06:43:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:11.148 06:43:25 
-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:11.148 06:43:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:11.148 06:43:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:11.148 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.148 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.148 06:43:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:11.405 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.405 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.405 06:43:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:11.405 06:43:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:11.405 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.405 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.405 06:43:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:11.405 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.405 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.405 06:43:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:11.405 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.405 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.405 06:43:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:11.405 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.405 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.405 06:43:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:11.405 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.405 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.405 06:43:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:11.405 06:43:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:11.662 06:43:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.662 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.662 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.662 06:43:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:11.662 06:43:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:13:11.662 06:43:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:11.662 06:43:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:11.662 06:43:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:11.662 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.662 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.662 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.662 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.921 06:43:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:11.921 06:43:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:11.921 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.921 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.921 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.921 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.921 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.921 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.921 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.921 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.921 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.921 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.179 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.179 06:43:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.179 06:43:25 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:12.179 06:43:25 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:12.179 06:43:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:12.179 06:43:25 -- nvmf/common.sh@116 -- # sync 00:13:12.179 06:43:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:12.179 06:43:25 -- nvmf/common.sh@119 -- # set +e 00:13:12.179 06:43:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:12.179 06:43:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:12.179 rmmod nvme_tcp 00:13:12.179 rmmod nvme_fabrics 00:13:12.179 rmmod nvme_keyring 00:13:12.179 06:43:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:12.179 06:43:25 -- nvmf/common.sh@123 -- # set -e 00:13:12.179 06:43:25 -- nvmf/common.sh@124 -- # return 0 00:13:12.179 06:43:25 -- nvmf/common.sh@477 -- # '[' -n 67518 ']' 00:13:12.179 06:43:25 -- nvmf/common.sh@478 -- # killprocess 67518 00:13:12.179 06:43:25 -- common/autotest_common.sh@936 -- # '[' -z 67518 ']' 00:13:12.179 06:43:25 -- common/autotest_common.sh@940 -- # kill -0 67518 00:13:12.179 06:43:25 -- common/autotest_common.sh@941 -- # uname 00:13:12.179 06:43:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:12.179 06:43:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67518 00:13:12.179 06:43:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:12.179 06:43:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:12.179 killing process with pid 67518 00:13:12.179 
06:43:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67518' 00:13:12.179 06:43:26 -- common/autotest_common.sh@955 -- # kill 67518 00:13:12.179 06:43:26 -- common/autotest_common.sh@960 -- # wait 67518 00:13:12.438 06:43:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:12.438 06:43:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:12.438 06:43:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:12.438 06:43:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:12.438 06:43:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:12.438 06:43:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.438 06:43:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.438 06:43:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.438 06:43:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:12.438 ************************************ 00:13:12.438 END TEST nvmf_ns_hotplug_stress 00:13:12.438 ************************************ 00:13:12.438 00:13:12.438 real 0m43.666s 00:13:12.438 user 3m29.541s 00:13:12.438 sys 0m12.859s 00:13:12.438 06:43:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:12.438 06:43:26 -- common/autotest_common.sh@10 -- # set +x 00:13:12.438 06:43:26 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:12.438 06:43:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:12.438 06:43:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:12.438 06:43:26 -- common/autotest_common.sh@10 -- # set +x 00:13:12.698 ************************************ 00:13:12.698 START TEST nvmf_connect_stress 00:13:12.698 ************************************ 00:13:12.698 06:43:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:12.698 * Looking for test storage... 00:13:12.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:12.698 06:43:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:12.698 06:43:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:12.698 06:43:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:12.698 06:43:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:12.698 06:43:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:12.698 06:43:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:12.698 06:43:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:12.698 06:43:26 -- scripts/common.sh@335 -- # IFS=.-: 00:13:12.698 06:43:26 -- scripts/common.sh@335 -- # read -ra ver1 00:13:12.698 06:43:26 -- scripts/common.sh@336 -- # IFS=.-: 00:13:12.698 06:43:26 -- scripts/common.sh@336 -- # read -ra ver2 00:13:12.698 06:43:26 -- scripts/common.sh@337 -- # local 'op=<' 00:13:12.698 06:43:26 -- scripts/common.sh@339 -- # ver1_l=2 00:13:12.698 06:43:26 -- scripts/common.sh@340 -- # ver2_l=1 00:13:12.698 06:43:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:12.698 06:43:26 -- scripts/common.sh@343 -- # case "$op" in 00:13:12.698 06:43:26 -- scripts/common.sh@344 -- # : 1 00:13:12.698 06:43:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:12.698 06:43:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:12.698 06:43:26 -- scripts/common.sh@364 -- # decimal 1 00:13:12.698 06:43:26 -- scripts/common.sh@352 -- # local d=1 00:13:12.698 06:43:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:12.698 06:43:26 -- scripts/common.sh@354 -- # echo 1 00:13:12.698 06:43:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:12.698 06:43:26 -- scripts/common.sh@365 -- # decimal 2 00:13:12.698 06:43:26 -- scripts/common.sh@352 -- # local d=2 00:13:12.698 06:43:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:12.698 06:43:26 -- scripts/common.sh@354 -- # echo 2 00:13:12.698 06:43:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:12.698 06:43:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:12.698 06:43:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:12.698 06:43:26 -- scripts/common.sh@367 -- # return 0 00:13:12.698 06:43:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:12.698 06:43:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:12.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.698 --rc genhtml_branch_coverage=1 00:13:12.698 --rc genhtml_function_coverage=1 00:13:12.698 --rc genhtml_legend=1 00:13:12.698 --rc geninfo_all_blocks=1 00:13:12.698 --rc geninfo_unexecuted_blocks=1 00:13:12.698 00:13:12.698 ' 00:13:12.698 06:43:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:12.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.698 --rc genhtml_branch_coverage=1 00:13:12.698 --rc genhtml_function_coverage=1 00:13:12.698 --rc genhtml_legend=1 00:13:12.698 --rc geninfo_all_blocks=1 00:13:12.698 --rc geninfo_unexecuted_blocks=1 00:13:12.698 00:13:12.698 ' 00:13:12.698 06:43:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:12.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.698 --rc genhtml_branch_coverage=1 00:13:12.698 --rc genhtml_function_coverage=1 00:13:12.698 --rc genhtml_legend=1 00:13:12.698 --rc geninfo_all_blocks=1 00:13:12.698 --rc geninfo_unexecuted_blocks=1 00:13:12.698 00:13:12.698 ' 00:13:12.698 06:43:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:12.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.698 --rc genhtml_branch_coverage=1 00:13:12.698 --rc genhtml_function_coverage=1 00:13:12.698 --rc genhtml_legend=1 00:13:12.698 --rc geninfo_all_blocks=1 00:13:12.698 --rc geninfo_unexecuted_blocks=1 00:13:12.698 00:13:12.698 ' 00:13:12.698 06:43:26 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:12.698 06:43:26 -- nvmf/common.sh@7 -- # uname -s 00:13:12.698 06:43:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:12.698 06:43:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:12.698 06:43:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:12.698 06:43:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:12.698 06:43:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:12.698 06:43:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:12.698 06:43:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:12.698 06:43:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:12.698 06:43:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:12.698 06:43:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:12.698 06:43:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 
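The bulk of the ns_hotplug_stress output earlier (the repeated @16/@17/@18 lines) records parallel namespace hotplug workers hammering cnode1. A minimal sketch of that loop, reconstructed only from the rpc.py invocations and loop markers visible in the trace; the actual script internals (backgrounding, timing, error handling) are assumed rather than copied:

```bash
# Sketch of the traced hotplug pattern, not the real ns_hotplug_stress.sh source.
# NQN, null bdev names, and RPC argument order are taken verbatim from the trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() { # one worker per namespace ID; assumption based on the interleaved trace
	local nsid=$1 bdev=$2
	for (( i = 0; i < 10; ++i )); do                          # @16: (( ++i )) / (( i < 10 ))
		"$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev" # @17: attach the namespace
		"$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"         # @18: detach it again
	done
}

for n in {1..8}; do
	add_remove "$n" "null$((n - 1))" &   # eight parallel workers -> interleaved xtrace output
done
wait
```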
00:13:12.698 06:43:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:13:12.698 06:43:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:12.698 06:43:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:12.698 06:43:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:12.698 06:43:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:12.698 06:43:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:12.698 06:43:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:12.698 06:43:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:12.698 06:43:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.698 06:43:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.698 06:43:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.698 06:43:26 -- paths/export.sh@5 -- # export PATH 00:13:12.698 06:43:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.698 06:43:26 -- nvmf/common.sh@46 -- # : 0 00:13:12.698 06:43:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:12.698 06:43:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:12.698 06:43:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:12.698 06:43:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:12.698 06:43:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:12.698 06:43:26 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:12.698 06:43:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:12.698 06:43:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:12.698 06:43:26 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:12.698 06:43:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:12.698 06:43:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:12.698 06:43:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:12.698 06:43:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:12.698 06:43:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:12.698 06:43:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.698 06:43:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.698 06:43:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.698 06:43:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:12.698 06:43:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:12.698 06:43:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:12.698 06:43:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:12.698 06:43:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:12.698 06:43:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:12.698 06:43:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.698 06:43:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.698 06:43:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:12.698 06:43:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:12.698 06:43:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:12.698 06:43:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:12.698 06:43:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:12.698 06:43:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.698 06:43:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:12.698 06:43:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:12.698 06:43:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:12.698 06:43:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:12.698 06:43:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:12.698 06:43:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:12.698 Cannot find device "nvmf_tgt_br" 00:13:12.698 06:43:26 -- nvmf/common.sh@154 -- # true 00:13:12.698 06:43:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:12.698 Cannot find device "nvmf_tgt_br2" 00:13:12.698 06:43:26 -- nvmf/common.sh@155 -- # true 00:13:12.698 06:43:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:12.698 06:43:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:12.698 Cannot find device "nvmf_tgt_br" 00:13:12.698 06:43:26 -- nvmf/common.sh@157 -- # true 00:13:12.698 06:43:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:12.698 Cannot find device "nvmf_tgt_br2" 00:13:12.698 06:43:26 -- nvmf/common.sh@158 -- # true 00:13:12.698 06:43:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:12.958 06:43:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:12.958 06:43:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:12.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:12.958 06:43:26 -- nvmf/common.sh@161 -- # true 00:13:12.958 06:43:26 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:12.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:12.958 06:43:26 -- nvmf/common.sh@162 -- # true 00:13:12.958 06:43:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:12.958 06:43:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:12.958 06:43:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:12.958 06:43:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:12.958 06:43:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:12.958 06:43:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:12.958 06:43:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:12.958 06:43:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:12.958 06:43:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:12.958 06:43:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:12.958 06:43:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:12.958 06:43:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:12.958 06:43:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:12.958 06:43:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:12.958 06:43:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:12.958 06:43:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:12.958 06:43:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:12.958 06:43:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:12.958 06:43:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:12.958 06:43:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:12.958 06:43:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:12.958 06:43:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:12.958 06:43:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:12.958 06:43:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:12.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:13:12.958 00:13:12.958 --- 10.0.0.2 ping statistics --- 00:13:12.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.958 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:13:12.958 06:43:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:12.958 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:12.958 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.028 ms 00:13:12.958 00:13:12.958 --- 10.0.0.3 ping statistics --- 00:13:12.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.958 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:13:12.958 06:43:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:12.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:12.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:13:12.958 00:13:12.958 --- 10.0.0.1 ping statistics --- 00:13:12.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.958 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:13:12.958 06:43:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.958 06:43:26 -- nvmf/common.sh@421 -- # return 0 00:13:12.958 06:43:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:12.958 06:43:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.958 06:43:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:12.958 06:43:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:12.958 06:43:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.958 06:43:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:12.958 06:43:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:13.217 06:43:26 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:13.217 06:43:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:13.217 06:43:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:13.217 06:43:26 -- common/autotest_common.sh@10 -- # set +x 00:13:13.217 06:43:26 -- nvmf/common.sh@469 -- # nvmfpid=70020 00:13:13.217 06:43:26 -- nvmf/common.sh@470 -- # waitforlisten 70020 00:13:13.217 06:43:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:13.217 06:43:26 -- common/autotest_common.sh@829 -- # '[' -z 70020 ']' 00:13:13.217 06:43:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.217 06:43:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:13.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.217 06:43:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.217 06:43:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:13.217 06:43:26 -- common/autotest_common.sh@10 -- # set +x 00:13:13.217 [2024-12-14 06:43:27.040761] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:13.217 [2024-12-14 06:43:27.040850] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.217 [2024-12-14 06:43:27.185324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:13.476 [2024-12-14 06:43:27.309874] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:13.476 [2024-12-14 06:43:27.310077] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.476 [2024-12-14 06:43:27.310095] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.476 [2024-12-14 06:43:27.310106] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
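The nvmf_veth_init portion of the trace above (the ip netns / ip link / iptables / ping commands) builds the virtual topology the TCP tests run over. A condensed sketch of that sequence, using only commands, interface names, and addresses that appear in the trace; per-interface link bring-up and error handling are abbreviated:

```bash
# Condensed from the traced nvmf_veth_init; not the full nvmf/common.sh logic.
ip netns add nvmf_tgt_ns_spdk                               # target runs in its own netns

ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator <-> bridge
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target port 1 <-> bridge
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target port 2 <-> bridge
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# ...each interface (and lo inside the netns) is then set up; see the trace...

ip link add nvmf_br type bridge                             # tie the three veth peers together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                                          # initiator -> target port 1
ping -c 1 10.0.0.3                                          # initiator -> target port 2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target -> initiator
```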
00:13:13.476 [2024-12-14 06:43:27.310285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.476 [2024-12-14 06:43:27.310761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:13.476 [2024-12-14 06:43:27.310816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.412 06:43:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:14.412 06:43:28 -- common/autotest_common.sh@862 -- # return 0 00:13:14.412 06:43:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:14.412 06:43:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:14.412 06:43:28 -- common/autotest_common.sh@10 -- # set +x 00:13:14.412 06:43:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.412 06:43:28 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:14.412 06:43:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.412 06:43:28 -- common/autotest_common.sh@10 -- # set +x 00:13:14.412 [2024-12-14 06:43:28.112615] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.412 06:43:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.412 06:43:28 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:14.412 06:43:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.412 06:43:28 -- common/autotest_common.sh@10 -- # set +x 00:13:14.412 06:43:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.412 06:43:28 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.412 06:43:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.412 06:43:28 -- common/autotest_common.sh@10 -- # set +x 00:13:14.412 [2024-12-14 06:43:28.132774] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.412 06:43:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.412 06:43:28 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:14.412 06:43:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.412 06:43:28 -- common/autotest_common.sh@10 -- # set +x 00:13:14.412 NULL1 00:13:14.412 06:43:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.412 06:43:28 -- target/connect_stress.sh@21 -- # PERF_PID=70072 00:13:14.412 06:43:28 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:14.412 06:43:28 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:14.412 06:43:28 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:14.413 06:43:28 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:14.413 06:43:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.413 06:43:28 -- target/connect_stress.sh@28 -- # cat 00:13:14.413 06:43:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.413 06:43:28 -- target/connect_stress.sh@28 -- # cat 00:13:14.413 06:43:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.413 06:43:28 -- target/connect_stress.sh@28 -- # cat 00:13:14.413 06:43:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.413 06:43:28 -- 
target/connect_stress.sh@28 -- # cat 00:13:14.413 06:43:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.413 06:43:28 -- target/connect_stress.sh@28 -- # cat 00:13:14.413 06:43:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.413 06:43:28 -- target/connect_stress.sh@28 -- # cat 00:13:14.413 06:43:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.413 06:43:28 -- target/connect_stress.sh@28 -- # cat 00:13:14.413 06:43:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.413 06:43:28 -- target/connect_stress.sh@28 -- # cat 00:13:14.413 06:43:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.413 06:43:28 -- target/connect_stress.sh@28 -- # cat 00:13:14.413 06:43:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.413 06:43:28 -- target/connect_stress.sh@28 -- # cat 00:13:14.413 06:43:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.413 06:43:28 -- target/connect_stress.sh@28 -- # cat 00:13:14.413 06:43:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.413 06:43:28 -- target/connect_stress.sh@28 -- # cat 00:13:14.413 06:43:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.413 06:43:28 -- target/connect_stress.sh@28 -- # cat 00:13:14.413 06:43:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.413 06:43:28 -- target/connect_stress.sh@28 -- # cat 00:13:14.413 06:43:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.413 06:43:28 -- target/connect_stress.sh@28 -- # cat 00:13:14.413 06:43:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.413 06:43:28 -- target/connect_stress.sh@28 -- # cat 00:13:14.413 06:43:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.413 06:43:28 -- target/connect_stress.sh@28 -- # cat 00:13:14.413 06:43:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.413 06:43:28 -- target/connect_stress.sh@28 -- # cat 00:13:14.413 06:43:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.413 06:43:28 -- target/connect_stress.sh@28 -- # cat 00:13:14.413 06:43:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.413 06:43:28 -- target/connect_stress.sh@28 -- # cat 00:13:14.413 06:43:28 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:14.413 06:43:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.413 06:43:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.413 06:43:28 -- common/autotest_common.sh@10 -- # set +x 00:13:14.671 06:43:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.671 06:43:28 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:14.671 06:43:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.671 06:43:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.671 06:43:28 -- common/autotest_common.sh@10 -- # set +x 00:13:14.930 06:43:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.930 06:43:28 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:14.930 06:43:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.930 06:43:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.930 06:43:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.498 06:43:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.498 06:43:29 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:15.498 06:43:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.498 06:43:29 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:15.498 06:43:29 -- common/autotest_common.sh@10 -- # set +x 00:13:15.756 06:43:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.756 06:43:29 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:15.756 06:43:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.756 06:43:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.756 06:43:29 -- common/autotest_common.sh@10 -- # set +x 00:13:16.014 06:43:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.014 06:43:29 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:16.014 06:43:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.014 06:43:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.014 06:43:29 -- common/autotest_common.sh@10 -- # set +x 00:13:16.272 06:43:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.272 06:43:30 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:16.272 06:43:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.272 06:43:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.272 06:43:30 -- common/autotest_common.sh@10 -- # set +x 00:13:16.531 06:43:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.531 06:43:30 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:16.531 06:43:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.531 06:43:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.531 06:43:30 -- common/autotest_common.sh@10 -- # set +x 00:13:17.098 06:43:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.098 06:43:30 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:17.098 06:43:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.098 06:43:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.098 06:43:30 -- common/autotest_common.sh@10 -- # set +x 00:13:17.357 06:43:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.357 06:43:31 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:17.357 06:43:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.357 06:43:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.357 06:43:31 -- common/autotest_common.sh@10 -- # set +x 00:13:17.616 06:43:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.616 06:43:31 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:17.616 06:43:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.616 06:43:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.616 06:43:31 -- common/autotest_common.sh@10 -- # set +x 00:13:17.874 06:43:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.874 06:43:31 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:17.874 06:43:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.874 06:43:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.874 06:43:31 -- common/autotest_common.sh@10 -- # set +x 00:13:18.135 06:43:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.135 06:43:32 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:18.135 06:43:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.135 06:43:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.135 06:43:32 -- common/autotest_common.sh@10 -- # set +x 00:13:18.748 06:43:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.748 06:43:32 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:18.748 06:43:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.748 06:43:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.748 
06:43:32 -- common/autotest_common.sh@10 -- # set +x 00:13:19.007 06:43:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.007 06:43:32 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:19.007 06:43:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.007 06:43:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.007 06:43:32 -- common/autotest_common.sh@10 -- # set +x 00:13:19.265 06:43:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.265 06:43:33 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:19.265 06:43:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.265 06:43:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.265 06:43:33 -- common/autotest_common.sh@10 -- # set +x 00:13:19.523 06:43:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.523 06:43:33 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:19.523 06:43:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.523 06:43:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.523 06:43:33 -- common/autotest_common.sh@10 -- # set +x 00:13:19.781 06:43:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.781 06:43:33 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:19.781 06:43:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.781 06:43:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.781 06:43:33 -- common/autotest_common.sh@10 -- # set +x 00:13:20.348 06:43:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.348 06:43:34 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:20.348 06:43:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.348 06:43:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.348 06:43:34 -- common/autotest_common.sh@10 -- # set +x 00:13:20.607 06:43:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.607 06:43:34 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:20.607 06:43:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.607 06:43:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.607 06:43:34 -- common/autotest_common.sh@10 -- # set +x 00:13:20.865 06:43:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.866 06:43:34 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:20.866 06:43:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.866 06:43:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.866 06:43:34 -- common/autotest_common.sh@10 -- # set +x 00:13:21.124 06:43:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.124 06:43:35 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:21.124 06:43:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.124 06:43:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.124 06:43:35 -- common/autotest_common.sh@10 -- # set +x 00:13:21.382 06:43:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.382 06:43:35 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:21.382 06:43:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.382 06:43:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.382 06:43:35 -- common/autotest_common.sh@10 -- # set +x 00:13:21.948 06:43:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.948 06:43:35 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:21.948 06:43:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.948 06:43:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.948 06:43:35 -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.206 06:43:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.206 06:43:35 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:22.206 06:43:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.206 06:43:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.206 06:43:35 -- common/autotest_common.sh@10 -- # set +x 00:13:22.465 06:43:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.465 06:43:36 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:22.465 06:43:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.465 06:43:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.465 06:43:36 -- common/autotest_common.sh@10 -- # set +x 00:13:22.724 06:43:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.724 06:43:36 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:22.724 06:43:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.724 06:43:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.724 06:43:36 -- common/autotest_common.sh@10 -- # set +x 00:13:22.983 06:43:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.983 06:43:36 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:22.983 06:43:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.983 06:43:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.983 06:43:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.550 06:43:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.550 06:43:37 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:23.550 06:43:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.550 06:43:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.550 06:43:37 -- common/autotest_common.sh@10 -- # set +x 00:13:23.809 06:43:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.809 06:43:37 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:23.809 06:43:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.809 06:43:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.809 06:43:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.068 06:43:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.068 06:43:37 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:24.068 06:43:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.068 06:43:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.068 06:43:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.327 06:43:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.327 06:43:38 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:24.327 06:43:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.327 06:43:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.327 06:43:38 -- common/autotest_common.sh@10 -- # set +x 00:13:24.585 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:24.585 06:43:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.585 06:43:38 -- target/connect_stress.sh@34 -- # kill -0 70072 00:13:24.585 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (70072) - No such process 00:13:24.585 06:43:38 -- target/connect_stress.sh@38 -- # wait 70072 00:13:24.585 06:43:38 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:24.585 06:43:38 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:24.585 06:43:38 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:13:24.585 06:43:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:24.585 06:43:38 -- nvmf/common.sh@116 -- # sync 00:13:24.844 06:43:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:24.844 06:43:38 -- nvmf/common.sh@119 -- # set +e 00:13:24.844 06:43:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:24.844 06:43:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:24.845 rmmod nvme_tcp 00:13:24.845 rmmod nvme_fabrics 00:13:24.845 rmmod nvme_keyring 00:13:24.845 06:43:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:24.845 06:43:38 -- nvmf/common.sh@123 -- # set -e 00:13:24.845 06:43:38 -- nvmf/common.sh@124 -- # return 0 00:13:24.845 06:43:38 -- nvmf/common.sh@477 -- # '[' -n 70020 ']' 00:13:24.845 06:43:38 -- nvmf/common.sh@478 -- # killprocess 70020 00:13:24.845 06:43:38 -- common/autotest_common.sh@936 -- # '[' -z 70020 ']' 00:13:24.845 06:43:38 -- common/autotest_common.sh@940 -- # kill -0 70020 00:13:24.845 06:43:38 -- common/autotest_common.sh@941 -- # uname 00:13:24.845 06:43:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:24.845 06:43:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70020 00:13:24.845 killing process with pid 70020 00:13:24.845 06:43:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:24.845 06:43:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:24.845 06:43:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70020' 00:13:24.845 06:43:38 -- common/autotest_common.sh@955 -- # kill 70020 00:13:24.845 06:43:38 -- common/autotest_common.sh@960 -- # wait 70020 00:13:25.104 06:43:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:25.104 06:43:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:25.104 06:43:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:25.104 06:43:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:25.104 06:43:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:25.104 06:43:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.104 06:43:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.104 06:43:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.104 06:43:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:25.104 00:13:25.104 real 0m12.608s 00:13:25.104 user 0m41.670s 00:13:25.104 sys 0m3.306s 00:13:25.104 06:43:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:25.104 06:43:39 -- common/autotest_common.sh@10 -- # set +x 00:13:25.104 ************************************ 00:13:25.104 END TEST nvmf_connect_stress 00:13:25.104 ************************************ 00:13:25.104 06:43:39 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:25.104 06:43:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:25.104 06:43:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:25.104 06:43:39 -- common/autotest_common.sh@10 -- # set +x 00:13:25.104 ************************************ 00:13:25.104 START TEST nvmf_fused_ordering 00:13:25.104 ************************************ 00:13:25.104 06:43:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:25.363 * Looking for test storage... 
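The connect_stress trace above loops on kill -0 against the stress process (PID 70072 in this run), issuing an rpc_cmd on each pass, until the PID disappears ("No such process"), then reaps it with wait and removes rpc.txt. A minimal bash sketch of that supervision pattern, using a placeholder PID argument rather than the harness code itself:

    # Sketch only: poll a background stress process until it exits, then reap it.
    # The run above polls the PID reported by the test (70072); here it is a parameter.
    poll_stress_process() {
        local stress_pid=$1
        # kill -0 sends no signal; it only tests whether the PID still exists.
        while kill -0 "$stress_pid" 2>/dev/null; do
            sleep 1    # the real loop issues an rpc_cmd between checks instead of sleeping
        done
        # Once the PID is gone, wait collects its exit status (the process is a child of the harness).
        wait "$stress_pid" 2>/dev/null || true
    }
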
00:13:25.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:25.363 06:43:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:25.363 06:43:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:25.363 06:43:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:25.363 06:43:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:25.363 06:43:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:25.363 06:43:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:25.363 06:43:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:25.363 06:43:39 -- scripts/common.sh@335 -- # IFS=.-: 00:13:25.363 06:43:39 -- scripts/common.sh@335 -- # read -ra ver1 00:13:25.363 06:43:39 -- scripts/common.sh@336 -- # IFS=.-: 00:13:25.363 06:43:39 -- scripts/common.sh@336 -- # read -ra ver2 00:13:25.363 06:43:39 -- scripts/common.sh@337 -- # local 'op=<' 00:13:25.363 06:43:39 -- scripts/common.sh@339 -- # ver1_l=2 00:13:25.363 06:43:39 -- scripts/common.sh@340 -- # ver2_l=1 00:13:25.363 06:43:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:25.363 06:43:39 -- scripts/common.sh@343 -- # case "$op" in 00:13:25.363 06:43:39 -- scripts/common.sh@344 -- # : 1 00:13:25.363 06:43:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:25.363 06:43:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:25.363 06:43:39 -- scripts/common.sh@364 -- # decimal 1 00:13:25.363 06:43:39 -- scripts/common.sh@352 -- # local d=1 00:13:25.363 06:43:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:25.363 06:43:39 -- scripts/common.sh@354 -- # echo 1 00:13:25.363 06:43:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:25.363 06:43:39 -- scripts/common.sh@365 -- # decimal 2 00:13:25.363 06:43:39 -- scripts/common.sh@352 -- # local d=2 00:13:25.363 06:43:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:25.363 06:43:39 -- scripts/common.sh@354 -- # echo 2 00:13:25.363 06:43:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:25.363 06:43:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:25.363 06:43:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:25.363 06:43:39 -- scripts/common.sh@367 -- # return 0 00:13:25.363 06:43:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:25.363 06:43:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:25.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.363 --rc genhtml_branch_coverage=1 00:13:25.363 --rc genhtml_function_coverage=1 00:13:25.363 --rc genhtml_legend=1 00:13:25.363 --rc geninfo_all_blocks=1 00:13:25.363 --rc geninfo_unexecuted_blocks=1 00:13:25.363 00:13:25.363 ' 00:13:25.363 06:43:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:25.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.363 --rc genhtml_branch_coverage=1 00:13:25.363 --rc genhtml_function_coverage=1 00:13:25.363 --rc genhtml_legend=1 00:13:25.363 --rc geninfo_all_blocks=1 00:13:25.363 --rc geninfo_unexecuted_blocks=1 00:13:25.363 00:13:25.363 ' 00:13:25.363 06:43:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:25.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.363 --rc genhtml_branch_coverage=1 00:13:25.363 --rc genhtml_function_coverage=1 00:13:25.363 --rc genhtml_legend=1 00:13:25.363 --rc geninfo_all_blocks=1 00:13:25.363 --rc geninfo_unexecuted_blocks=1 00:13:25.363 00:13:25.363 ' 00:13:25.363 
06:43:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:25.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.363 --rc genhtml_branch_coverage=1 00:13:25.363 --rc genhtml_function_coverage=1 00:13:25.363 --rc genhtml_legend=1 00:13:25.363 --rc geninfo_all_blocks=1 00:13:25.363 --rc geninfo_unexecuted_blocks=1 00:13:25.363 00:13:25.363 ' 00:13:25.363 06:43:39 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:25.363 06:43:39 -- nvmf/common.sh@7 -- # uname -s 00:13:25.363 06:43:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.363 06:43:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.363 06:43:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.363 06:43:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.363 06:43:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.363 06:43:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.363 06:43:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.363 06:43:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.363 06:43:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.363 06:43:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.363 06:43:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:13:25.363 06:43:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:13:25.363 06:43:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.363 06:43:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.363 06:43:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:25.363 06:43:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:25.363 06:43:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.363 06:43:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.363 06:43:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.363 06:43:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.363 06:43:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.363 06:43:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.363 06:43:39 -- paths/export.sh@5 -- # export PATH 00:13:25.364 06:43:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.364 06:43:39 -- nvmf/common.sh@46 -- # : 0 00:13:25.364 06:43:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:25.364 06:43:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:25.364 06:43:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:25.364 06:43:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.364 06:43:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.364 06:43:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:25.364 06:43:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:25.364 06:43:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:25.364 06:43:39 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:25.364 06:43:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:25.364 06:43:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.364 06:43:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:25.364 06:43:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:25.364 06:43:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:25.364 06:43:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.364 06:43:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.364 06:43:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.364 06:43:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:25.364 06:43:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:25.364 06:43:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:25.364 06:43:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:25.364 06:43:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:25.364 06:43:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:25.364 06:43:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.364 06:43:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:25.364 06:43:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:25.364 06:43:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:25.364 06:43:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:25.364 06:43:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:25.364 06:43:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:25.364 06:43:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:25.364 06:43:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:25.364 06:43:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:25.364 06:43:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:25.364 06:43:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:25.364 06:43:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:25.364 06:43:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:25.364 Cannot find device "nvmf_tgt_br" 00:13:25.364 06:43:39 -- nvmf/common.sh@154 -- # true 00:13:25.364 06:43:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:25.364 Cannot find device "nvmf_tgt_br2" 00:13:25.364 06:43:39 -- nvmf/common.sh@155 -- # true 00:13:25.364 06:43:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:25.364 06:43:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:25.364 Cannot find device "nvmf_tgt_br" 00:13:25.364 06:43:39 -- nvmf/common.sh@157 -- # true 00:13:25.364 06:43:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:25.364 Cannot find device "nvmf_tgt_br2" 00:13:25.623 06:43:39 -- nvmf/common.sh@158 -- # true 00:13:25.623 06:43:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:25.623 06:43:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:25.623 06:43:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:25.623 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:25.623 06:43:39 -- nvmf/common.sh@161 -- # true 00:13:25.623 06:43:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:25.623 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:25.623 06:43:39 -- nvmf/common.sh@162 -- # true 00:13:25.623 06:43:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:25.623 06:43:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:25.623 06:43:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:25.623 06:43:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:25.623 06:43:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:25.623 06:43:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:25.623 06:43:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:25.623 06:43:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:25.623 06:43:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:25.623 06:43:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:25.623 06:43:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:25.623 06:43:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:25.623 06:43:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:25.623 06:43:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:25.623 06:43:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:25.623 06:43:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:25.623 06:43:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:25.623 06:43:39 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:25.623 06:43:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:25.623 06:43:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:25.623 06:43:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:25.623 06:43:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:25.623 06:43:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:25.623 06:43:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:25.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:25.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:13:25.623 00:13:25.623 --- 10.0.0.2 ping statistics --- 00:13:25.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.623 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:13:25.623 06:43:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:25.623 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:25.623 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:13:25.623 00:13:25.623 --- 10.0.0.3 ping statistics --- 00:13:25.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.623 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:13:25.623 06:43:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:25.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:25.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:25.623 00:13:25.623 --- 10.0.0.1 ping statistics --- 00:13:25.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.623 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:25.623 06:43:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.623 06:43:39 -- nvmf/common.sh@421 -- # return 0 00:13:25.623 06:43:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:25.623 06:43:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.623 06:43:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:25.623 06:43:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:25.623 06:43:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.623 06:43:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:25.623 06:43:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:25.882 06:43:39 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:25.882 06:43:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:25.882 06:43:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:25.882 06:43:39 -- common/autotest_common.sh@10 -- # set +x 00:13:25.882 06:43:39 -- nvmf/common.sh@469 -- # nvmfpid=70408 00:13:25.882 06:43:39 -- nvmf/common.sh@470 -- # waitforlisten 70408 00:13:25.882 06:43:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:25.882 06:43:39 -- common/autotest_common.sh@829 -- # '[' -z 70408 ']' 00:13:25.882 06:43:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.882 06:43:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:25.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.882 06:43:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
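The nvmf_veth_init trace above wires the initiator and target together before nvmf_tgt (PID 70408) is launched inside the namespace: a network namespace for the target, veth pairs whose host ends are enslaved to a bridge, fixed 10.0.0.x addresses, an iptables accept rule for port 4420, and ping checks. A condensed sketch of that layout using the same names seen in the trace (root required; the second target interface, nvmf_tgt_if2 with 10.0.0.3, is omitted for brevity), not the harness function itself:

    # Condensed sketch of the veth/bridge topology built above.
    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"
    # One veth pair for the initiator side, one for the target side.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns "$NS"                          # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # address the target listens on
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set lo up
    # Bridge the host-side peers so the initiator can reach the namespaced target.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                           # same sanity check as in the log
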
00:13:25.882 06:43:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:25.882 06:43:39 -- common/autotest_common.sh@10 -- # set +x 00:13:25.882 [2024-12-14 06:43:39.694812] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:25.882 [2024-12-14 06:43:39.694932] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.882 [2024-12-14 06:43:39.835579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.140 [2024-12-14 06:43:39.931377] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:26.140 [2024-12-14 06:43:39.931541] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.140 [2024-12-14 06:43:39.931553] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.140 [2024-12-14 06:43:39.931561] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:26.140 [2024-12-14 06:43:39.931594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.706 06:43:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:26.706 06:43:40 -- common/autotest_common.sh@862 -- # return 0 00:13:26.706 06:43:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:26.706 06:43:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:26.706 06:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:26.706 06:43:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.706 06:43:40 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:26.706 06:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.706 06:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:26.706 [2024-12-14 06:43:40.684815] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.706 06:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.706 06:43:40 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:26.706 06:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.706 06:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:26.965 06:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.965 06:43:40 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.965 06:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.965 06:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:26.965 [2024-12-14 06:43:40.700937] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.965 06:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.965 06:43:40 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:26.965 06:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.965 06:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:26.965 NULL1 00:13:26.965 06:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.965 06:43:40 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:26.965 06:43:40 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:26.965 06:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:26.965 06:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.965 06:43:40 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:26.965 06:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.965 06:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:26.965 06:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.965 06:43:40 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:26.965 [2024-12-14 06:43:40.754204] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:26.965 [2024-12-14 06:43:40.754256] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70458 ] 00:13:27.224 Attached to nqn.2016-06.io.spdk:cnode1 00:13:27.224 Namespace ID: 1 size: 1GB 00:13:27.224 fused_ordering(0) 00:13:27.224 fused_ordering(1) 00:13:27.224 fused_ordering(2) 00:13:27.224 fused_ordering(3) 00:13:27.224 fused_ordering(4) 00:13:27.224 fused_ordering(5) 00:13:27.224 fused_ordering(6) 00:13:27.224 fused_ordering(7) 00:13:27.224 fused_ordering(8) 00:13:27.224 fused_ordering(9) 00:13:27.224 fused_ordering(10) 00:13:27.224 fused_ordering(11) 00:13:27.224 fused_ordering(12) 00:13:27.224 fused_ordering(13) 00:13:27.224 fused_ordering(14) 00:13:27.224 fused_ordering(15) 00:13:27.224 fused_ordering(16) 00:13:27.224 fused_ordering(17) 00:13:27.224 fused_ordering(18) 00:13:27.224 fused_ordering(19) 00:13:27.224 fused_ordering(20) 00:13:27.224 fused_ordering(21) 00:13:27.224 fused_ordering(22) 00:13:27.224 fused_ordering(23) 00:13:27.224 fused_ordering(24) 00:13:27.224 fused_ordering(25) 00:13:27.224 fused_ordering(26) 00:13:27.224 fused_ordering(27) 00:13:27.224 fused_ordering(28) 00:13:27.224 fused_ordering(29) 00:13:27.224 fused_ordering(30) 00:13:27.224 fused_ordering(31) 00:13:27.224 fused_ordering(32) 00:13:27.224 fused_ordering(33) 00:13:27.224 fused_ordering(34) 00:13:27.224 fused_ordering(35) 00:13:27.224 fused_ordering(36) 00:13:27.224 fused_ordering(37) 00:13:27.224 fused_ordering(38) 00:13:27.224 fused_ordering(39) 00:13:27.224 fused_ordering(40) 00:13:27.224 fused_ordering(41) 00:13:27.224 fused_ordering(42) 00:13:27.224 fused_ordering(43) 00:13:27.224 fused_ordering(44) 00:13:27.224 fused_ordering(45) 00:13:27.224 fused_ordering(46) 00:13:27.224 fused_ordering(47) 00:13:27.225 fused_ordering(48) 00:13:27.225 fused_ordering(49) 00:13:27.225 fused_ordering(50) 00:13:27.225 fused_ordering(51) 00:13:27.225 fused_ordering(52) 00:13:27.225 fused_ordering(53) 00:13:27.225 fused_ordering(54) 00:13:27.225 fused_ordering(55) 00:13:27.225 fused_ordering(56) 00:13:27.225 fused_ordering(57) 00:13:27.225 fused_ordering(58) 00:13:27.225 fused_ordering(59) 00:13:27.225 fused_ordering(60) 00:13:27.225 fused_ordering(61) 00:13:27.225 fused_ordering(62) 00:13:27.225 fused_ordering(63) 00:13:27.225 fused_ordering(64) 00:13:27.225 fused_ordering(65) 00:13:27.225 fused_ordering(66) 00:13:27.225 fused_ordering(67) 00:13:27.225 fused_ordering(68) 00:13:27.225 fused_ordering(69) 00:13:27.225 fused_ordering(70) 00:13:27.225 fused_ordering(71) 00:13:27.225 fused_ordering(72) 00:13:27.225 
fused_ordering(73) 00:13:27.225 fused_ordering(74) 00:13:27.225 fused_ordering(75) 00:13:27.225 fused_ordering(76) 00:13:27.225 fused_ordering(77) 00:13:27.225 fused_ordering(78) 00:13:27.225 fused_ordering(79) 00:13:27.225 fused_ordering(80) 00:13:27.225 fused_ordering(81) 00:13:27.225 fused_ordering(82) 00:13:27.225 fused_ordering(83) 00:13:27.225 fused_ordering(84) 00:13:27.225 fused_ordering(85) 00:13:27.225 fused_ordering(86) 00:13:27.225 fused_ordering(87) 00:13:27.225 fused_ordering(88) 00:13:27.225 fused_ordering(89) 00:13:27.225 fused_ordering(90) 00:13:27.225 fused_ordering(91) 00:13:27.225 fused_ordering(92) 00:13:27.225 fused_ordering(93) 00:13:27.225 fused_ordering(94) 00:13:27.225 fused_ordering(95) 00:13:27.225 fused_ordering(96) 00:13:27.225 fused_ordering(97) 00:13:27.225 fused_ordering(98) 00:13:27.225 fused_ordering(99) 00:13:27.225 fused_ordering(100) 00:13:27.225 fused_ordering(101) 00:13:27.225 fused_ordering(102) 00:13:27.225 fused_ordering(103) 00:13:27.225 fused_ordering(104) 00:13:27.225 fused_ordering(105) 00:13:27.225 fused_ordering(106) 00:13:27.225 fused_ordering(107) 00:13:27.225 fused_ordering(108) 00:13:27.225 fused_ordering(109) 00:13:27.225 fused_ordering(110) 00:13:27.225 fused_ordering(111) 00:13:27.225 fused_ordering(112) 00:13:27.225 fused_ordering(113) 00:13:27.225 fused_ordering(114) 00:13:27.225 fused_ordering(115) 00:13:27.225 fused_ordering(116) 00:13:27.225 fused_ordering(117) 00:13:27.225 fused_ordering(118) 00:13:27.225 fused_ordering(119) 00:13:27.225 fused_ordering(120) 00:13:27.225 fused_ordering(121) 00:13:27.225 fused_ordering(122) 00:13:27.225 fused_ordering(123) 00:13:27.225 fused_ordering(124) 00:13:27.225 fused_ordering(125) 00:13:27.225 fused_ordering(126) 00:13:27.225 fused_ordering(127) 00:13:27.225 fused_ordering(128) 00:13:27.225 fused_ordering(129) 00:13:27.225 fused_ordering(130) 00:13:27.225 fused_ordering(131) 00:13:27.225 fused_ordering(132) 00:13:27.225 fused_ordering(133) 00:13:27.225 fused_ordering(134) 00:13:27.225 fused_ordering(135) 00:13:27.225 fused_ordering(136) 00:13:27.225 fused_ordering(137) 00:13:27.225 fused_ordering(138) 00:13:27.225 fused_ordering(139) 00:13:27.225 fused_ordering(140) 00:13:27.225 fused_ordering(141) 00:13:27.225 fused_ordering(142) 00:13:27.225 fused_ordering(143) 00:13:27.225 fused_ordering(144) 00:13:27.225 fused_ordering(145) 00:13:27.225 fused_ordering(146) 00:13:27.225 fused_ordering(147) 00:13:27.225 fused_ordering(148) 00:13:27.225 fused_ordering(149) 00:13:27.225 fused_ordering(150) 00:13:27.225 fused_ordering(151) 00:13:27.225 fused_ordering(152) 00:13:27.225 fused_ordering(153) 00:13:27.225 fused_ordering(154) 00:13:27.225 fused_ordering(155) 00:13:27.225 fused_ordering(156) 00:13:27.225 fused_ordering(157) 00:13:27.225 fused_ordering(158) 00:13:27.225 fused_ordering(159) 00:13:27.225 fused_ordering(160) 00:13:27.225 fused_ordering(161) 00:13:27.225 fused_ordering(162) 00:13:27.225 fused_ordering(163) 00:13:27.225 fused_ordering(164) 00:13:27.225 fused_ordering(165) 00:13:27.225 fused_ordering(166) 00:13:27.225 fused_ordering(167) 00:13:27.225 fused_ordering(168) 00:13:27.225 fused_ordering(169) 00:13:27.225 fused_ordering(170) 00:13:27.225 fused_ordering(171) 00:13:27.225 fused_ordering(172) 00:13:27.225 fused_ordering(173) 00:13:27.225 fused_ordering(174) 00:13:27.225 fused_ordering(175) 00:13:27.225 fused_ordering(176) 00:13:27.225 fused_ordering(177) 00:13:27.225 fused_ordering(178) 00:13:27.225 fused_ordering(179) 00:13:27.225 fused_ordering(180) 00:13:27.225 
fused_ordering(181) 00:13:27.225 fused_ordering(182) 00:13:27.225 fused_ordering(183) 00:13:27.225 fused_ordering(184) 00:13:27.225 fused_ordering(185) 00:13:27.225 fused_ordering(186) 00:13:27.225 fused_ordering(187) 00:13:27.225 fused_ordering(188) 00:13:27.225 fused_ordering(189) 00:13:27.225 fused_ordering(190) 00:13:27.225 fused_ordering(191) 00:13:27.225 fused_ordering(192) 00:13:27.225 fused_ordering(193) 00:13:27.225 fused_ordering(194) 00:13:27.225 fused_ordering(195) 00:13:27.225 fused_ordering(196) 00:13:27.225 fused_ordering(197) 00:13:27.225 fused_ordering(198) 00:13:27.225 fused_ordering(199) 00:13:27.225 fused_ordering(200) 00:13:27.225 fused_ordering(201) 00:13:27.225 fused_ordering(202) 00:13:27.225 fused_ordering(203) 00:13:27.225 fused_ordering(204) 00:13:27.225 fused_ordering(205) 00:13:27.484 fused_ordering(206) 00:13:27.484 fused_ordering(207) 00:13:27.484 fused_ordering(208) 00:13:27.484 fused_ordering(209) 00:13:27.484 fused_ordering(210) 00:13:27.484 fused_ordering(211) 00:13:27.484 fused_ordering(212) 00:13:27.484 fused_ordering(213) 00:13:27.484 fused_ordering(214) 00:13:27.484 fused_ordering(215) 00:13:27.484 fused_ordering(216) 00:13:27.484 fused_ordering(217) 00:13:27.484 fused_ordering(218) 00:13:27.484 fused_ordering(219) 00:13:27.484 fused_ordering(220) 00:13:27.484 fused_ordering(221) 00:13:27.484 fused_ordering(222) 00:13:27.484 fused_ordering(223) 00:13:27.484 fused_ordering(224) 00:13:27.484 fused_ordering(225) 00:13:27.484 fused_ordering(226) 00:13:27.484 fused_ordering(227) 00:13:27.484 fused_ordering(228) 00:13:27.484 fused_ordering(229) 00:13:27.484 fused_ordering(230) 00:13:27.484 fused_ordering(231) 00:13:27.484 fused_ordering(232) 00:13:27.484 fused_ordering(233) 00:13:27.484 fused_ordering(234) 00:13:27.484 fused_ordering(235) 00:13:27.484 fused_ordering(236) 00:13:27.484 fused_ordering(237) 00:13:27.484 fused_ordering(238) 00:13:27.484 fused_ordering(239) 00:13:27.484 fused_ordering(240) 00:13:27.484 fused_ordering(241) 00:13:27.484 fused_ordering(242) 00:13:27.484 fused_ordering(243) 00:13:27.484 fused_ordering(244) 00:13:27.484 fused_ordering(245) 00:13:27.484 fused_ordering(246) 00:13:27.484 fused_ordering(247) 00:13:27.484 fused_ordering(248) 00:13:27.484 fused_ordering(249) 00:13:27.484 fused_ordering(250) 00:13:27.484 fused_ordering(251) 00:13:27.484 fused_ordering(252) 00:13:27.484 fused_ordering(253) 00:13:27.484 fused_ordering(254) 00:13:27.484 fused_ordering(255) 00:13:27.484 fused_ordering(256) 00:13:27.484 fused_ordering(257) 00:13:27.484 fused_ordering(258) 00:13:27.484 fused_ordering(259) 00:13:27.484 fused_ordering(260) 00:13:27.484 fused_ordering(261) 00:13:27.484 fused_ordering(262) 00:13:27.484 fused_ordering(263) 00:13:27.484 fused_ordering(264) 00:13:27.484 fused_ordering(265) 00:13:27.484 fused_ordering(266) 00:13:27.484 fused_ordering(267) 00:13:27.484 fused_ordering(268) 00:13:27.484 fused_ordering(269) 00:13:27.484 fused_ordering(270) 00:13:27.484 fused_ordering(271) 00:13:27.484 fused_ordering(272) 00:13:27.484 fused_ordering(273) 00:13:27.484 fused_ordering(274) 00:13:27.484 fused_ordering(275) 00:13:27.484 fused_ordering(276) 00:13:27.484 fused_ordering(277) 00:13:27.484 fused_ordering(278) 00:13:27.484 fused_ordering(279) 00:13:27.484 fused_ordering(280) 00:13:27.484 fused_ordering(281) 00:13:27.484 fused_ordering(282) 00:13:27.484 fused_ordering(283) 00:13:27.484 fused_ordering(284) 00:13:27.484 fused_ordering(285) 00:13:27.484 fused_ordering(286) 00:13:27.484 fused_ordering(287) 00:13:27.484 fused_ordering(288) 
00:13:27.484 fused_ordering(289) 00:13:27.484 fused_ordering(290) 00:13:27.484 fused_ordering(291) 00:13:27.484 fused_ordering(292) 00:13:27.484 fused_ordering(293) 00:13:27.484 fused_ordering(294) 00:13:27.484 fused_ordering(295) 00:13:27.484 fused_ordering(296) 00:13:27.484 fused_ordering(297) 00:13:27.484 fused_ordering(298) 00:13:27.484 fused_ordering(299) 00:13:27.484 fused_ordering(300) 00:13:27.484 fused_ordering(301) 00:13:27.484 fused_ordering(302) 00:13:27.484 fused_ordering(303) 00:13:27.484 fused_ordering(304) 00:13:27.484 fused_ordering(305) 00:13:27.484 fused_ordering(306) 00:13:27.484 fused_ordering(307) 00:13:27.484 fused_ordering(308) 00:13:27.484 fused_ordering(309) 00:13:27.484 fused_ordering(310) 00:13:27.484 fused_ordering(311) 00:13:27.484 fused_ordering(312) 00:13:27.484 fused_ordering(313) 00:13:27.484 fused_ordering(314) 00:13:27.484 fused_ordering(315) 00:13:27.484 fused_ordering(316) 00:13:27.484 fused_ordering(317) 00:13:27.484 fused_ordering(318) 00:13:27.484 fused_ordering(319) 00:13:27.484 fused_ordering(320) 00:13:27.484 fused_ordering(321) 00:13:27.484 fused_ordering(322) 00:13:27.484 fused_ordering(323) 00:13:27.484 fused_ordering(324) 00:13:27.484 fused_ordering(325) 00:13:27.484 fused_ordering(326) 00:13:27.484 fused_ordering(327) 00:13:27.484 fused_ordering(328) 00:13:27.484 fused_ordering(329) 00:13:27.484 fused_ordering(330) 00:13:27.484 fused_ordering(331) 00:13:27.484 fused_ordering(332) 00:13:27.484 fused_ordering(333) 00:13:27.484 fused_ordering(334) 00:13:27.484 fused_ordering(335) 00:13:27.484 fused_ordering(336) 00:13:27.484 fused_ordering(337) 00:13:27.484 fused_ordering(338) 00:13:27.484 fused_ordering(339) 00:13:27.484 fused_ordering(340) 00:13:27.484 fused_ordering(341) 00:13:27.484 fused_ordering(342) 00:13:27.484 fused_ordering(343) 00:13:27.484 fused_ordering(344) 00:13:27.484 fused_ordering(345) 00:13:27.484 fused_ordering(346) 00:13:27.484 fused_ordering(347) 00:13:27.484 fused_ordering(348) 00:13:27.484 fused_ordering(349) 00:13:27.484 fused_ordering(350) 00:13:27.484 fused_ordering(351) 00:13:27.484 fused_ordering(352) 00:13:27.484 fused_ordering(353) 00:13:27.484 fused_ordering(354) 00:13:27.484 fused_ordering(355) 00:13:27.484 fused_ordering(356) 00:13:27.484 fused_ordering(357) 00:13:27.484 fused_ordering(358) 00:13:27.484 fused_ordering(359) 00:13:27.484 fused_ordering(360) 00:13:27.484 fused_ordering(361) 00:13:27.484 fused_ordering(362) 00:13:27.484 fused_ordering(363) 00:13:27.484 fused_ordering(364) 00:13:27.484 fused_ordering(365) 00:13:27.484 fused_ordering(366) 00:13:27.484 fused_ordering(367) 00:13:27.484 fused_ordering(368) 00:13:27.484 fused_ordering(369) 00:13:27.484 fused_ordering(370) 00:13:27.484 fused_ordering(371) 00:13:27.484 fused_ordering(372) 00:13:27.484 fused_ordering(373) 00:13:27.484 fused_ordering(374) 00:13:27.484 fused_ordering(375) 00:13:27.484 fused_ordering(376) 00:13:27.484 fused_ordering(377) 00:13:27.484 fused_ordering(378) 00:13:27.484 fused_ordering(379) 00:13:27.484 fused_ordering(380) 00:13:27.484 fused_ordering(381) 00:13:27.484 fused_ordering(382) 00:13:27.484 fused_ordering(383) 00:13:27.484 fused_ordering(384) 00:13:27.484 fused_ordering(385) 00:13:27.484 fused_ordering(386) 00:13:27.484 fused_ordering(387) 00:13:27.484 fused_ordering(388) 00:13:27.484 fused_ordering(389) 00:13:27.484 fused_ordering(390) 00:13:27.484 fused_ordering(391) 00:13:27.484 fused_ordering(392) 00:13:27.484 fused_ordering(393) 00:13:27.484 fused_ordering(394) 00:13:27.484 fused_ordering(395) 00:13:27.484 
fused_ordering(396) 00:13:27.484 fused_ordering(397) 00:13:27.484 fused_ordering(398) 00:13:27.484 fused_ordering(399) 00:13:27.484 fused_ordering(400) 00:13:27.484 fused_ordering(401) 00:13:27.484 fused_ordering(402) 00:13:27.484 fused_ordering(403) 00:13:27.484 fused_ordering(404) 00:13:27.484 fused_ordering(405) 00:13:27.484 fused_ordering(406) 00:13:27.484 fused_ordering(407) 00:13:27.484 fused_ordering(408) 00:13:27.484 fused_ordering(409) 00:13:27.484 fused_ordering(410) 00:13:27.743 fused_ordering(411) 00:13:27.743 fused_ordering(412) 00:13:27.743 fused_ordering(413) 00:13:27.743 fused_ordering(414) 00:13:27.743 fused_ordering(415) 00:13:27.743 fused_ordering(416) 00:13:27.743 fused_ordering(417) 00:13:27.743 fused_ordering(418) 00:13:27.743 fused_ordering(419) 00:13:27.743 fused_ordering(420) 00:13:27.743 fused_ordering(421) 00:13:27.743 fused_ordering(422) 00:13:27.743 fused_ordering(423) 00:13:27.743 fused_ordering(424) 00:13:27.743 fused_ordering(425) 00:13:27.743 fused_ordering(426) 00:13:27.743 fused_ordering(427) 00:13:27.743 fused_ordering(428) 00:13:27.743 fused_ordering(429) 00:13:27.743 fused_ordering(430) 00:13:27.743 fused_ordering(431) 00:13:27.743 fused_ordering(432) 00:13:27.743 fused_ordering(433) 00:13:27.743 fused_ordering(434) 00:13:27.743 fused_ordering(435) 00:13:27.743 fused_ordering(436) 00:13:27.743 fused_ordering(437) 00:13:27.743 fused_ordering(438) 00:13:27.743 fused_ordering(439) 00:13:27.743 fused_ordering(440) 00:13:27.743 fused_ordering(441) 00:13:27.743 fused_ordering(442) 00:13:27.743 fused_ordering(443) 00:13:27.743 fused_ordering(444) 00:13:27.743 fused_ordering(445) 00:13:27.743 fused_ordering(446) 00:13:27.743 fused_ordering(447) 00:13:27.743 fused_ordering(448) 00:13:27.743 fused_ordering(449) 00:13:27.743 fused_ordering(450) 00:13:27.743 fused_ordering(451) 00:13:27.743 fused_ordering(452) 00:13:27.743 fused_ordering(453) 00:13:27.743 fused_ordering(454) 00:13:27.743 fused_ordering(455) 00:13:27.743 fused_ordering(456) 00:13:27.743 fused_ordering(457) 00:13:27.743 fused_ordering(458) 00:13:27.743 fused_ordering(459) 00:13:27.743 fused_ordering(460) 00:13:27.743 fused_ordering(461) 00:13:27.743 fused_ordering(462) 00:13:27.743 fused_ordering(463) 00:13:27.743 fused_ordering(464) 00:13:27.743 fused_ordering(465) 00:13:27.743 fused_ordering(466) 00:13:27.743 fused_ordering(467) 00:13:27.743 fused_ordering(468) 00:13:27.743 fused_ordering(469) 00:13:27.743 fused_ordering(470) 00:13:27.743 fused_ordering(471) 00:13:27.743 fused_ordering(472) 00:13:27.743 fused_ordering(473) 00:13:27.743 fused_ordering(474) 00:13:27.743 fused_ordering(475) 00:13:27.743 fused_ordering(476) 00:13:27.743 fused_ordering(477) 00:13:27.743 fused_ordering(478) 00:13:27.743 fused_ordering(479) 00:13:27.743 fused_ordering(480) 00:13:27.743 fused_ordering(481) 00:13:27.743 fused_ordering(482) 00:13:27.743 fused_ordering(483) 00:13:27.743 fused_ordering(484) 00:13:27.743 fused_ordering(485) 00:13:27.743 fused_ordering(486) 00:13:27.743 fused_ordering(487) 00:13:27.743 fused_ordering(488) 00:13:27.743 fused_ordering(489) 00:13:27.743 fused_ordering(490) 00:13:27.743 fused_ordering(491) 00:13:27.743 fused_ordering(492) 00:13:27.743 fused_ordering(493) 00:13:27.743 fused_ordering(494) 00:13:27.743 fused_ordering(495) 00:13:27.744 fused_ordering(496) 00:13:27.744 fused_ordering(497) 00:13:27.744 fused_ordering(498) 00:13:27.744 fused_ordering(499) 00:13:27.744 fused_ordering(500) 00:13:27.744 fused_ordering(501) 00:13:27.744 fused_ordering(502) 00:13:27.744 fused_ordering(503) 
00:13:27.744 fused_ordering(504) 00:13:27.744 fused_ordering(505) 00:13:27.744 fused_ordering(506) 00:13:27.744 fused_ordering(507) 00:13:27.744 fused_ordering(508) 00:13:27.744 fused_ordering(509) 00:13:27.744 fused_ordering(510) 00:13:27.744 fused_ordering(511) 00:13:27.744 fused_ordering(512) 00:13:27.744 fused_ordering(513) 00:13:27.744 fused_ordering(514) 00:13:27.744 fused_ordering(515) 00:13:27.744 fused_ordering(516) 00:13:27.744 fused_ordering(517) 00:13:27.744 fused_ordering(518) 00:13:27.744 fused_ordering(519) 00:13:27.744 fused_ordering(520) 00:13:27.744 fused_ordering(521) 00:13:27.744 fused_ordering(522) 00:13:27.744 fused_ordering(523) 00:13:27.744 fused_ordering(524) 00:13:27.744 fused_ordering(525) 00:13:27.744 fused_ordering(526) 00:13:27.744 fused_ordering(527) 00:13:27.744 fused_ordering(528) 00:13:27.744 fused_ordering(529) 00:13:27.744 fused_ordering(530) 00:13:27.744 fused_ordering(531) 00:13:27.744 fused_ordering(532) 00:13:27.744 fused_ordering(533) 00:13:27.744 fused_ordering(534) 00:13:27.744 fused_ordering(535) 00:13:27.744 fused_ordering(536) 00:13:27.744 fused_ordering(537) 00:13:27.744 fused_ordering(538) 00:13:27.744 fused_ordering(539) 00:13:27.744 fused_ordering(540) 00:13:27.744 fused_ordering(541) 00:13:27.744 fused_ordering(542) 00:13:27.744 fused_ordering(543) 00:13:27.744 fused_ordering(544) 00:13:27.744 fused_ordering(545) 00:13:27.744 fused_ordering(546) 00:13:27.744 fused_ordering(547) 00:13:27.744 fused_ordering(548) 00:13:27.744 fused_ordering(549) 00:13:27.744 fused_ordering(550) 00:13:27.744 fused_ordering(551) 00:13:27.744 fused_ordering(552) 00:13:27.744 fused_ordering(553) 00:13:27.744 fused_ordering(554) 00:13:27.744 fused_ordering(555) 00:13:27.744 fused_ordering(556) 00:13:27.744 fused_ordering(557) 00:13:27.744 fused_ordering(558) 00:13:27.744 fused_ordering(559) 00:13:27.744 fused_ordering(560) 00:13:27.744 fused_ordering(561) 00:13:27.744 fused_ordering(562) 00:13:27.744 fused_ordering(563) 00:13:27.744 fused_ordering(564) 00:13:27.744 fused_ordering(565) 00:13:27.744 fused_ordering(566) 00:13:27.744 fused_ordering(567) 00:13:27.744 fused_ordering(568) 00:13:27.744 fused_ordering(569) 00:13:27.744 fused_ordering(570) 00:13:27.744 fused_ordering(571) 00:13:27.744 fused_ordering(572) 00:13:27.744 fused_ordering(573) 00:13:27.744 fused_ordering(574) 00:13:27.744 fused_ordering(575) 00:13:27.744 fused_ordering(576) 00:13:27.744 fused_ordering(577) 00:13:27.744 fused_ordering(578) 00:13:27.744 fused_ordering(579) 00:13:27.744 fused_ordering(580) 00:13:27.744 fused_ordering(581) 00:13:27.744 fused_ordering(582) 00:13:27.744 fused_ordering(583) 00:13:27.744 fused_ordering(584) 00:13:27.744 fused_ordering(585) 00:13:27.744 fused_ordering(586) 00:13:27.744 fused_ordering(587) 00:13:27.744 fused_ordering(588) 00:13:27.744 fused_ordering(589) 00:13:27.744 fused_ordering(590) 00:13:27.744 fused_ordering(591) 00:13:27.744 fused_ordering(592) 00:13:27.744 fused_ordering(593) 00:13:27.744 fused_ordering(594) 00:13:27.744 fused_ordering(595) 00:13:27.744 fused_ordering(596) 00:13:27.744 fused_ordering(597) 00:13:27.744 fused_ordering(598) 00:13:27.744 fused_ordering(599) 00:13:27.744 fused_ordering(600) 00:13:27.744 fused_ordering(601) 00:13:27.744 fused_ordering(602) 00:13:27.744 fused_ordering(603) 00:13:27.744 fused_ordering(604) 00:13:27.744 fused_ordering(605) 00:13:27.744 fused_ordering(606) 00:13:27.744 fused_ordering(607) 00:13:27.744 fused_ordering(608) 00:13:27.744 fused_ordering(609) 00:13:27.744 fused_ordering(610) 00:13:27.744 
fused_ordering(611) 00:13:27.744 fused_ordering(612) 00:13:27.744 fused_ordering(613) 00:13:27.744 fused_ordering(614) 00:13:27.744 fused_ordering(615) 00:13:28.311 fused_ordering(616) 00:13:28.311 fused_ordering(617) 00:13:28.311 fused_ordering(618) 00:13:28.311 fused_ordering(619) 00:13:28.311 fused_ordering(620) 00:13:28.311 fused_ordering(621) 00:13:28.311 fused_ordering(622) 00:13:28.311 fused_ordering(623) 00:13:28.311 fused_ordering(624) 00:13:28.311 fused_ordering(625) 00:13:28.311 fused_ordering(626) 00:13:28.311 fused_ordering(627) 00:13:28.311 fused_ordering(628) 00:13:28.311 fused_ordering(629) 00:13:28.311 fused_ordering(630) 00:13:28.311 fused_ordering(631) 00:13:28.311 fused_ordering(632) 00:13:28.311 fused_ordering(633) 00:13:28.311 fused_ordering(634) 00:13:28.311 fused_ordering(635) 00:13:28.311 fused_ordering(636) 00:13:28.311 fused_ordering(637) 00:13:28.311 fused_ordering(638) 00:13:28.311 fused_ordering(639) 00:13:28.311 fused_ordering(640) 00:13:28.311 fused_ordering(641) 00:13:28.311 fused_ordering(642) 00:13:28.311 fused_ordering(643) 00:13:28.311 fused_ordering(644) 00:13:28.311 fused_ordering(645) 00:13:28.311 fused_ordering(646) 00:13:28.311 fused_ordering(647) 00:13:28.311 fused_ordering(648) 00:13:28.311 fused_ordering(649) 00:13:28.311 fused_ordering(650) 00:13:28.311 fused_ordering(651) 00:13:28.311 fused_ordering(652) 00:13:28.311 fused_ordering(653) 00:13:28.311 fused_ordering(654) 00:13:28.311 fused_ordering(655) 00:13:28.311 fused_ordering(656) 00:13:28.311 fused_ordering(657) 00:13:28.311 fused_ordering(658) 00:13:28.311 fused_ordering(659) 00:13:28.311 fused_ordering(660) 00:13:28.311 fused_ordering(661) 00:13:28.311 fused_ordering(662) 00:13:28.311 fused_ordering(663) 00:13:28.311 fused_ordering(664) 00:13:28.311 fused_ordering(665) 00:13:28.311 fused_ordering(666) 00:13:28.311 fused_ordering(667) 00:13:28.311 fused_ordering(668) 00:13:28.311 fused_ordering(669) 00:13:28.311 fused_ordering(670) 00:13:28.311 fused_ordering(671) 00:13:28.311 fused_ordering(672) 00:13:28.311 fused_ordering(673) 00:13:28.311 fused_ordering(674) 00:13:28.311 fused_ordering(675) 00:13:28.311 fused_ordering(676) 00:13:28.311 fused_ordering(677) 00:13:28.311 fused_ordering(678) 00:13:28.311 fused_ordering(679) 00:13:28.311 fused_ordering(680) 00:13:28.311 fused_ordering(681) 00:13:28.311 fused_ordering(682) 00:13:28.311 fused_ordering(683) 00:13:28.311 fused_ordering(684) 00:13:28.311 fused_ordering(685) 00:13:28.311 fused_ordering(686) 00:13:28.311 fused_ordering(687) 00:13:28.311 fused_ordering(688) 00:13:28.311 fused_ordering(689) 00:13:28.311 fused_ordering(690) 00:13:28.311 fused_ordering(691) 00:13:28.311 fused_ordering(692) 00:13:28.311 fused_ordering(693) 00:13:28.311 fused_ordering(694) 00:13:28.311 fused_ordering(695) 00:13:28.311 fused_ordering(696) 00:13:28.311 fused_ordering(697) 00:13:28.311 fused_ordering(698) 00:13:28.311 fused_ordering(699) 00:13:28.311 fused_ordering(700) 00:13:28.311 fused_ordering(701) 00:13:28.311 fused_ordering(702) 00:13:28.311 fused_ordering(703) 00:13:28.311 fused_ordering(704) 00:13:28.311 fused_ordering(705) 00:13:28.311 fused_ordering(706) 00:13:28.311 fused_ordering(707) 00:13:28.311 fused_ordering(708) 00:13:28.311 fused_ordering(709) 00:13:28.311 fused_ordering(710) 00:13:28.311 fused_ordering(711) 00:13:28.311 fused_ordering(712) 00:13:28.311 fused_ordering(713) 00:13:28.311 fused_ordering(714) 00:13:28.311 fused_ordering(715) 00:13:28.311 fused_ordering(716) 00:13:28.311 fused_ordering(717) 00:13:28.311 fused_ordering(718) 
00:13:28.312 fused_ordering(719) 00:13:28.312 fused_ordering(720) 00:13:28.312 fused_ordering(721) 00:13:28.312 fused_ordering(722) 00:13:28.312 fused_ordering(723) 00:13:28.312 fused_ordering(724) 00:13:28.312 fused_ordering(725) 00:13:28.312 fused_ordering(726) 00:13:28.312 fused_ordering(727) 00:13:28.312 fused_ordering(728) 00:13:28.312 fused_ordering(729) 00:13:28.312 fused_ordering(730) 00:13:28.312 fused_ordering(731) 00:13:28.312 fused_ordering(732) 00:13:28.312 fused_ordering(733) 00:13:28.312 fused_ordering(734) 00:13:28.312 fused_ordering(735) 00:13:28.312 fused_ordering(736) 00:13:28.312 fused_ordering(737) 00:13:28.312 fused_ordering(738) 00:13:28.312 fused_ordering(739) 00:13:28.312 fused_ordering(740) 00:13:28.312 fused_ordering(741) 00:13:28.312 fused_ordering(742) 00:13:28.312 fused_ordering(743) 00:13:28.312 fused_ordering(744) 00:13:28.312 fused_ordering(745) 00:13:28.312 fused_ordering(746) 00:13:28.312 fused_ordering(747) 00:13:28.312 fused_ordering(748) 00:13:28.312 fused_ordering(749) 00:13:28.312 fused_ordering(750) 00:13:28.312 fused_ordering(751) 00:13:28.312 fused_ordering(752) 00:13:28.312 fused_ordering(753) 00:13:28.312 fused_ordering(754) 00:13:28.312 fused_ordering(755) 00:13:28.312 fused_ordering(756) 00:13:28.312 fused_ordering(757) 00:13:28.312 fused_ordering(758) 00:13:28.312 fused_ordering(759) 00:13:28.312 fused_ordering(760) 00:13:28.312 fused_ordering(761) 00:13:28.312 fused_ordering(762) 00:13:28.312 fused_ordering(763) 00:13:28.312 fused_ordering(764) 00:13:28.312 fused_ordering(765) 00:13:28.312 fused_ordering(766) 00:13:28.312 fused_ordering(767) 00:13:28.312 fused_ordering(768) 00:13:28.312 fused_ordering(769) 00:13:28.312 fused_ordering(770) 00:13:28.312 fused_ordering(771) 00:13:28.312 fused_ordering(772) 00:13:28.312 fused_ordering(773) 00:13:28.312 fused_ordering(774) 00:13:28.312 fused_ordering(775) 00:13:28.312 fused_ordering(776) 00:13:28.312 fused_ordering(777) 00:13:28.312 fused_ordering(778) 00:13:28.312 fused_ordering(779) 00:13:28.312 fused_ordering(780) 00:13:28.312 fused_ordering(781) 00:13:28.312 fused_ordering(782) 00:13:28.312 fused_ordering(783) 00:13:28.312 fused_ordering(784) 00:13:28.312 fused_ordering(785) 00:13:28.312 fused_ordering(786) 00:13:28.312 fused_ordering(787) 00:13:28.312 fused_ordering(788) 00:13:28.312 fused_ordering(789) 00:13:28.312 fused_ordering(790) 00:13:28.312 fused_ordering(791) 00:13:28.312 fused_ordering(792) 00:13:28.312 fused_ordering(793) 00:13:28.312 fused_ordering(794) 00:13:28.312 fused_ordering(795) 00:13:28.312 fused_ordering(796) 00:13:28.312 fused_ordering(797) 00:13:28.312 fused_ordering(798) 00:13:28.312 fused_ordering(799) 00:13:28.312 fused_ordering(800) 00:13:28.312 fused_ordering(801) 00:13:28.312 fused_ordering(802) 00:13:28.312 fused_ordering(803) 00:13:28.312 fused_ordering(804) 00:13:28.312 fused_ordering(805) 00:13:28.312 fused_ordering(806) 00:13:28.312 fused_ordering(807) 00:13:28.312 fused_ordering(808) 00:13:28.312 fused_ordering(809) 00:13:28.312 fused_ordering(810) 00:13:28.312 fused_ordering(811) 00:13:28.312 fused_ordering(812) 00:13:28.312 fused_ordering(813) 00:13:28.312 fused_ordering(814) 00:13:28.312 fused_ordering(815) 00:13:28.312 fused_ordering(816) 00:13:28.312 fused_ordering(817) 00:13:28.312 fused_ordering(818) 00:13:28.312 fused_ordering(819) 00:13:28.312 fused_ordering(820) 00:13:28.570 fused_ordering(821) 00:13:28.570 fused_ordering(822) 00:13:28.570 fused_ordering(823) 00:13:28.570 fused_ordering(824) 00:13:28.570 fused_ordering(825) 00:13:28.570 
fused_ordering(826) 00:13:28.571 fused_ordering(827) 00:13:28.571 fused_ordering(828) 00:13:28.571 fused_ordering(829) 00:13:28.571 fused_ordering(830) 00:13:28.571 fused_ordering(831) 00:13:28.571 fused_ordering(832) 00:13:28.571 fused_ordering(833) 00:13:28.571 fused_ordering(834) 00:13:28.571 fused_ordering(835) 00:13:28.571 fused_ordering(836) 00:13:28.571 fused_ordering(837) 00:13:28.571 fused_ordering(838) 00:13:28.571 fused_ordering(839) 00:13:28.571 fused_ordering(840) 00:13:28.571 fused_ordering(841) 00:13:28.571 fused_ordering(842) 00:13:28.571 fused_ordering(843) 00:13:28.571 fused_ordering(844) 00:13:28.571 fused_ordering(845) 00:13:28.571 fused_ordering(846) 00:13:28.571 fused_ordering(847) 00:13:28.571 fused_ordering(848) 00:13:28.571 fused_ordering(849) 00:13:28.571 fused_ordering(850) 00:13:28.571 fused_ordering(851) 00:13:28.571 fused_ordering(852) 00:13:28.571 fused_ordering(853) 00:13:28.571 fused_ordering(854) 00:13:28.571 fused_ordering(855) 00:13:28.571 fused_ordering(856) 00:13:28.571 fused_ordering(857) 00:13:28.571 fused_ordering(858) 00:13:28.571 fused_ordering(859) 00:13:28.571 fused_ordering(860) 00:13:28.571 fused_ordering(861) 00:13:28.571 fused_ordering(862) 00:13:28.571 fused_ordering(863) 00:13:28.571 fused_ordering(864) 00:13:28.571 fused_ordering(865) 00:13:28.571 fused_ordering(866) 00:13:28.571 fused_ordering(867) 00:13:28.571 fused_ordering(868) 00:13:28.571 fused_ordering(869) 00:13:28.571 fused_ordering(870) 00:13:28.571 fused_ordering(871) 00:13:28.571 fused_ordering(872) 00:13:28.571 fused_ordering(873) 00:13:28.571 fused_ordering(874) 00:13:28.571 fused_ordering(875) 00:13:28.571 fused_ordering(876) 00:13:28.571 fused_ordering(877) 00:13:28.571 fused_ordering(878) 00:13:28.571 fused_ordering(879) 00:13:28.571 fused_ordering(880) 00:13:28.571 fused_ordering(881) 00:13:28.571 fused_ordering(882) 00:13:28.571 fused_ordering(883) 00:13:28.571 fused_ordering(884) 00:13:28.571 fused_ordering(885) 00:13:28.571 fused_ordering(886) 00:13:28.571 fused_ordering(887) 00:13:28.571 fused_ordering(888) 00:13:28.571 fused_ordering(889) 00:13:28.571 fused_ordering(890) 00:13:28.571 fused_ordering(891) 00:13:28.571 fused_ordering(892) 00:13:28.571 fused_ordering(893) 00:13:28.571 fused_ordering(894) 00:13:28.571 fused_ordering(895) 00:13:28.571 fused_ordering(896) 00:13:28.571 fused_ordering(897) 00:13:28.571 fused_ordering(898) 00:13:28.571 fused_ordering(899) 00:13:28.571 fused_ordering(900) 00:13:28.571 fused_ordering(901) 00:13:28.571 fused_ordering(902) 00:13:28.571 fused_ordering(903) 00:13:28.571 fused_ordering(904) 00:13:28.571 fused_ordering(905) 00:13:28.571 fused_ordering(906) 00:13:28.571 fused_ordering(907) 00:13:28.571 fused_ordering(908) 00:13:28.571 fused_ordering(909) 00:13:28.571 fused_ordering(910) 00:13:28.571 fused_ordering(911) 00:13:28.571 fused_ordering(912) 00:13:28.571 fused_ordering(913) 00:13:28.571 fused_ordering(914) 00:13:28.571 fused_ordering(915) 00:13:28.571 fused_ordering(916) 00:13:28.571 fused_ordering(917) 00:13:28.571 fused_ordering(918) 00:13:28.571 fused_ordering(919) 00:13:28.571 fused_ordering(920) 00:13:28.571 fused_ordering(921) 00:13:28.571 fused_ordering(922) 00:13:28.571 fused_ordering(923) 00:13:28.571 fused_ordering(924) 00:13:28.571 fused_ordering(925) 00:13:28.571 fused_ordering(926) 00:13:28.571 fused_ordering(927) 00:13:28.571 fused_ordering(928) 00:13:28.571 fused_ordering(929) 00:13:28.571 fused_ordering(930) 00:13:28.571 fused_ordering(931) 00:13:28.571 fused_ordering(932) 00:13:28.571 fused_ordering(933) 
00:13:28.571 fused_ordering(934) 00:13:28.571 fused_ordering(935) 00:13:28.571 fused_ordering(936) 00:13:28.571 fused_ordering(937) 00:13:28.571 fused_ordering(938) 00:13:28.571 fused_ordering(939) 00:13:28.571 fused_ordering(940) 00:13:28.571 fused_ordering(941) 00:13:28.571 fused_ordering(942) 00:13:28.571 fused_ordering(943) 00:13:28.571 fused_ordering(944) 00:13:28.571 fused_ordering(945) 00:13:28.571 fused_ordering(946) 00:13:28.571 fused_ordering(947) 00:13:28.571 fused_ordering(948) 00:13:28.571 fused_ordering(949) 00:13:28.571 fused_ordering(950) 00:13:28.571 fused_ordering(951) 00:13:28.571 fused_ordering(952) 00:13:28.571 fused_ordering(953) 00:13:28.571 fused_ordering(954) 00:13:28.571 fused_ordering(955) 00:13:28.571 fused_ordering(956) 00:13:28.571 fused_ordering(957) 00:13:28.571 fused_ordering(958) 00:13:28.571 fused_ordering(959) 00:13:28.571 fused_ordering(960) 00:13:28.571 fused_ordering(961) 00:13:28.571 fused_ordering(962) 00:13:28.571 fused_ordering(963) 00:13:28.571 fused_ordering(964) 00:13:28.571 fused_ordering(965) 00:13:28.571 fused_ordering(966) 00:13:28.571 fused_ordering(967) 00:13:28.571 fused_ordering(968) 00:13:28.571 fused_ordering(969) 00:13:28.571 fused_ordering(970) 00:13:28.571 fused_ordering(971) 00:13:28.571 fused_ordering(972) 00:13:28.571 fused_ordering(973) 00:13:28.571 fused_ordering(974) 00:13:28.571 fused_ordering(975) 00:13:28.571 fused_ordering(976) 00:13:28.571 fused_ordering(977) 00:13:28.571 fused_ordering(978) 00:13:28.571 fused_ordering(979) 00:13:28.571 fused_ordering(980) 00:13:28.571 fused_ordering(981) 00:13:28.571 fused_ordering(982) 00:13:28.571 fused_ordering(983) 00:13:28.571 fused_ordering(984) 00:13:28.571 fused_ordering(985) 00:13:28.571 fused_ordering(986) 00:13:28.571 fused_ordering(987) 00:13:28.571 fused_ordering(988) 00:13:28.571 fused_ordering(989) 00:13:28.571 fused_ordering(990) 00:13:28.571 fused_ordering(991) 00:13:28.571 fused_ordering(992) 00:13:28.571 fused_ordering(993) 00:13:28.571 fused_ordering(994) 00:13:28.571 fused_ordering(995) 00:13:28.571 fused_ordering(996) 00:13:28.571 fused_ordering(997) 00:13:28.571 fused_ordering(998) 00:13:28.571 fused_ordering(999) 00:13:28.571 fused_ordering(1000) 00:13:28.571 fused_ordering(1001) 00:13:28.571 fused_ordering(1002) 00:13:28.571 fused_ordering(1003) 00:13:28.571 fused_ordering(1004) 00:13:28.571 fused_ordering(1005) 00:13:28.571 fused_ordering(1006) 00:13:28.571 fused_ordering(1007) 00:13:28.571 fused_ordering(1008) 00:13:28.571 fused_ordering(1009) 00:13:28.571 fused_ordering(1010) 00:13:28.571 fused_ordering(1011) 00:13:28.571 fused_ordering(1012) 00:13:28.571 fused_ordering(1013) 00:13:28.571 fused_ordering(1014) 00:13:28.571 fused_ordering(1015) 00:13:28.571 fused_ordering(1016) 00:13:28.571 fused_ordering(1017) 00:13:28.571 fused_ordering(1018) 00:13:28.571 fused_ordering(1019) 00:13:28.571 fused_ordering(1020) 00:13:28.571 fused_ordering(1021) 00:13:28.571 fused_ordering(1022) 00:13:28.571 fused_ordering(1023) 00:13:28.571 06:43:42 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:28.571 06:43:42 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:28.571 06:43:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:28.571 06:43:42 -- nvmf/common.sh@116 -- # sync 00:13:28.571 06:43:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:28.571 06:43:42 -- nvmf/common.sh@119 -- # set +e 00:13:28.571 06:43:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:28.571 06:43:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:28.571 rmmod 
nvme_tcp 00:13:28.571 rmmod nvme_fabrics 00:13:28.830 rmmod nvme_keyring 00:13:28.830 06:43:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:28.830 06:43:42 -- nvmf/common.sh@123 -- # set -e 00:13:28.830 06:43:42 -- nvmf/common.sh@124 -- # return 0 00:13:28.830 06:43:42 -- nvmf/common.sh@477 -- # '[' -n 70408 ']' 00:13:28.830 06:43:42 -- nvmf/common.sh@478 -- # killprocess 70408 00:13:28.830 06:43:42 -- common/autotest_common.sh@936 -- # '[' -z 70408 ']' 00:13:28.830 06:43:42 -- common/autotest_common.sh@940 -- # kill -0 70408 00:13:28.830 06:43:42 -- common/autotest_common.sh@941 -- # uname 00:13:28.830 06:43:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:28.830 06:43:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70408 00:13:28.830 06:43:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:28.830 killing process with pid 70408 00:13:28.830 06:43:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:28.830 06:43:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70408' 00:13:28.830 06:43:42 -- common/autotest_common.sh@955 -- # kill 70408 00:13:28.830 06:43:42 -- common/autotest_common.sh@960 -- # wait 70408 00:13:29.088 06:43:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:29.088 06:43:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:29.088 06:43:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:29.088 06:43:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:29.088 06:43:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:29.088 06:43:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.088 06:43:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.088 06:43:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.088 06:43:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:29.088 00:13:29.088 real 0m3.901s 00:13:29.088 user 0m4.461s 00:13:29.088 sys 0m1.277s 00:13:29.088 06:43:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:29.088 ************************************ 00:13:29.088 END TEST nvmf_fused_ordering 00:13:29.088 ************************************ 00:13:29.088 06:43:42 -- common/autotest_common.sh@10 -- # set +x 00:13:29.088 06:43:43 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:29.088 06:43:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:29.088 06:43:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:29.088 06:43:43 -- common/autotest_common.sh@10 -- # set +x 00:13:29.089 ************************************ 00:13:29.089 START TEST nvmf_delete_subsystem 00:13:29.089 ************************************ 00:13:29.089 06:43:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:29.348 * Looking for test storage... 
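[Editor's note] The nvmftestfini sequence that closes the fused_ordering run above amounts to unloading the initiator-side kernel modules, stopping the nvmf_tgt process, and tearing the test network back down. A condensed sketch of that cleanup, using only commands visible in the trace (nvmfpid stands in for the target pid, 70408 in this run; remove_spdk_ns is approximated by the ip netns delete line):

  modprobe -v -r nvme-tcp            # drops nvme_tcp, nvme_fabrics, nvme_keyring (the rmmod lines above)
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                    # stop the nvmf_tgt started for this test
  wait "$nvmfpid"
  ip netns delete nvmf_tgt_ns_spdk   # roughly what remove_spdk_ns does: drop the target-side namespace
  ip -4 addr flush nvmf_init_if      # clear the initiator veth address
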
00:13:29.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:29.348 06:43:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:29.348 06:43:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:29.348 06:43:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:29.348 06:43:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:29.348 06:43:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:29.348 06:43:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:29.348 06:43:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:29.348 06:43:43 -- scripts/common.sh@335 -- # IFS=.-: 00:13:29.348 06:43:43 -- scripts/common.sh@335 -- # read -ra ver1 00:13:29.348 06:43:43 -- scripts/common.sh@336 -- # IFS=.-: 00:13:29.348 06:43:43 -- scripts/common.sh@336 -- # read -ra ver2 00:13:29.348 06:43:43 -- scripts/common.sh@337 -- # local 'op=<' 00:13:29.348 06:43:43 -- scripts/common.sh@339 -- # ver1_l=2 00:13:29.348 06:43:43 -- scripts/common.sh@340 -- # ver2_l=1 00:13:29.348 06:43:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:29.348 06:43:43 -- scripts/common.sh@343 -- # case "$op" in 00:13:29.348 06:43:43 -- scripts/common.sh@344 -- # : 1 00:13:29.348 06:43:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:29.348 06:43:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:29.348 06:43:43 -- scripts/common.sh@364 -- # decimal 1 00:13:29.348 06:43:43 -- scripts/common.sh@352 -- # local d=1 00:13:29.348 06:43:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:29.348 06:43:43 -- scripts/common.sh@354 -- # echo 1 00:13:29.348 06:43:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:29.348 06:43:43 -- scripts/common.sh@365 -- # decimal 2 00:13:29.348 06:43:43 -- scripts/common.sh@352 -- # local d=2 00:13:29.348 06:43:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:29.348 06:43:43 -- scripts/common.sh@354 -- # echo 2 00:13:29.348 06:43:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:29.348 06:43:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:29.348 06:43:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:29.348 06:43:43 -- scripts/common.sh@367 -- # return 0 00:13:29.348 06:43:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:29.348 06:43:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:29.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.348 --rc genhtml_branch_coverage=1 00:13:29.348 --rc genhtml_function_coverage=1 00:13:29.348 --rc genhtml_legend=1 00:13:29.348 --rc geninfo_all_blocks=1 00:13:29.348 --rc geninfo_unexecuted_blocks=1 00:13:29.348 00:13:29.348 ' 00:13:29.348 06:43:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:29.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.348 --rc genhtml_branch_coverage=1 00:13:29.348 --rc genhtml_function_coverage=1 00:13:29.348 --rc genhtml_legend=1 00:13:29.348 --rc geninfo_all_blocks=1 00:13:29.348 --rc geninfo_unexecuted_blocks=1 00:13:29.348 00:13:29.348 ' 00:13:29.348 06:43:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:29.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.348 --rc genhtml_branch_coverage=1 00:13:29.348 --rc genhtml_function_coverage=1 00:13:29.348 --rc genhtml_legend=1 00:13:29.348 --rc geninfo_all_blocks=1 00:13:29.348 --rc geninfo_unexecuted_blocks=1 00:13:29.348 00:13:29.348 ' 00:13:29.348 
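[Editor's note] The scripts/common.sh trace above (lt 1.15 2 via cmp_versions) is deciding whether the installed lcov predates 2.x before settling on the legacy --rc coverage options. A simplified stand-in for that per-field comparison, not the actual scripts/common.sh implementation, only to show the idea:

  # version_lt A B: succeed if A < B, comparing dot-separated numeric fields
  version_lt() {
      local IFS=. i
      local -a a=($1) b=($2)
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          local x=${a[i]:-0} y=${b[i]:-0}
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 \
      && echo "lcov is pre-2.x: keep the legacy --rc options traced above"
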
06:43:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:29.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.348 --rc genhtml_branch_coverage=1 00:13:29.348 --rc genhtml_function_coverage=1 00:13:29.348 --rc genhtml_legend=1 00:13:29.348 --rc geninfo_all_blocks=1 00:13:29.348 --rc geninfo_unexecuted_blocks=1 00:13:29.348 00:13:29.348 ' 00:13:29.348 06:43:43 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:29.348 06:43:43 -- nvmf/common.sh@7 -- # uname -s 00:13:29.348 06:43:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.348 06:43:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.348 06:43:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.348 06:43:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.348 06:43:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.348 06:43:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.348 06:43:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.348 06:43:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.348 06:43:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.348 06:43:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.348 06:43:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:13:29.348 06:43:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:13:29.348 06:43:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.348 06:43:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.348 06:43:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:29.348 06:43:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:29.348 06:43:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.348 06:43:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.348 06:43:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.348 06:43:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.348 06:43:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.348 06:43:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.348 06:43:43 -- paths/export.sh@5 -- # export PATH 00:13:29.348 06:43:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.348 06:43:43 -- nvmf/common.sh@46 -- # : 0 00:13:29.348 06:43:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:29.348 06:43:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:29.348 06:43:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:29.348 06:43:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.348 06:43:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.348 06:43:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:29.348 06:43:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:29.348 06:43:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:29.348 06:43:43 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:29.348 06:43:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:29.348 06:43:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.348 06:43:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:29.348 06:43:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:29.348 06:43:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:29.348 06:43:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.348 06:43:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.348 06:43:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.348 06:43:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:29.348 06:43:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:29.348 06:43:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:29.348 06:43:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:29.348 06:43:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:29.348 06:43:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:29.348 06:43:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.348 06:43:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.348 06:43:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:29.348 06:43:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:29.348 06:43:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:29.348 06:43:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:29.348 06:43:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:29.348 06:43:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:29.348 06:43:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:29.348 06:43:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:29.348 06:43:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:29.348 06:43:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:29.348 06:43:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:29.348 06:43:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:29.348 Cannot find device "nvmf_tgt_br" 00:13:29.348 06:43:43 -- nvmf/common.sh@154 -- # true 00:13:29.348 06:43:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:29.348 Cannot find device "nvmf_tgt_br2" 00:13:29.348 06:43:43 -- nvmf/common.sh@155 -- # true 00:13:29.348 06:43:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:29.349 06:43:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:29.349 Cannot find device "nvmf_tgt_br" 00:13:29.349 06:43:43 -- nvmf/common.sh@157 -- # true 00:13:29.349 06:43:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:29.349 Cannot find device "nvmf_tgt_br2" 00:13:29.349 06:43:43 -- nvmf/common.sh@158 -- # true 00:13:29.349 06:43:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:29.608 06:43:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:29.608 06:43:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:29.608 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:29.608 06:43:43 -- nvmf/common.sh@161 -- # true 00:13:29.608 06:43:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:29.608 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:29.608 06:43:43 -- nvmf/common.sh@162 -- # true 00:13:29.608 06:43:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:29.608 06:43:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:29.608 06:43:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:29.608 06:43:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:29.608 06:43:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:29.608 06:43:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:29.608 06:43:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:29.608 06:43:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:29.608 06:43:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:29.608 06:43:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:29.608 06:43:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:29.608 06:43:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:29.608 06:43:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:29.608 06:43:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:29.608 06:43:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:29.608 06:43:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:29.608 06:43:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:29.608 06:43:43 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:29.608 06:43:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:29.608 06:43:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:29.608 06:43:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:29.608 06:43:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:29.608 06:43:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:29.608 06:43:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:29.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:13:29.608 00:13:29.608 --- 10.0.0.2 ping statistics --- 00:13:29.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.608 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:13:29.608 06:43:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:29.608 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:29.608 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:13:29.608 00:13:29.608 --- 10.0.0.3 ping statistics --- 00:13:29.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.608 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:13:29.608 06:43:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:29.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:29.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:13:29.608 00:13:29.608 --- 10.0.0.1 ping statistics --- 00:13:29.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.608 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:13:29.608 06:43:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.608 06:43:43 -- nvmf/common.sh@421 -- # return 0 00:13:29.608 06:43:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:29.608 06:43:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.608 06:43:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:29.608 06:43:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:29.608 06:43:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.608 06:43:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:29.608 06:43:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:29.608 06:43:43 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:29.608 06:43:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:29.608 06:43:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:29.608 06:43:43 -- common/autotest_common.sh@10 -- # set +x 00:13:29.608 06:43:43 -- nvmf/common.sh@469 -- # nvmfpid=70648 00:13:29.608 06:43:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:29.608 06:43:43 -- nvmf/common.sh@470 -- # waitforlisten 70648 00:13:29.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.608 06:43:43 -- common/autotest_common.sh@829 -- # '[' -z 70648 ']' 00:13:29.608 06:43:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.608 06:43:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:29.608 06:43:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
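[Editor's note] Everything from ip netns add through the three pings above builds the virtual topology implied by NET_TYPE=virt: the target side lives in the nvmf_tgt_ns_spdk namespace, veth pairs cross the boundary, and a bridge plus one iptables rule let 10.0.0.1 (initiator) reach 10.0.0.2/10.0.0.3 (target) on port 4420. A condensed sketch of that setup, with the second target interface (10.0.0.3) omitted:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target sanity check

The nvmf_tgt process is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3), which is why waitforlisten polls /var/tmp/spdk.sock before any RPCs are issued.
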
00:13:29.608 06:43:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:29.608 06:43:43 -- common/autotest_common.sh@10 -- # set +x 00:13:29.868 [2024-12-14 06:43:43.627673] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:29.868 [2024-12-14 06:43:43.627756] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.868 [2024-12-14 06:43:43.763673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:30.126 [2024-12-14 06:43:43.885863] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:30.126 [2024-12-14 06:43:43.886062] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.126 [2024-12-14 06:43:43.886080] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.126 [2024-12-14 06:43:43.886092] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.126 [2024-12-14 06:43:43.886414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.126 [2024-12-14 06:43:43.886611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.694 06:43:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:30.694 06:43:44 -- common/autotest_common.sh@862 -- # return 0 00:13:30.694 06:43:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:30.694 06:43:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:30.694 06:43:44 -- common/autotest_common.sh@10 -- # set +x 00:13:30.952 06:43:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.952 06:43:44 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:30.952 06:43:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.952 06:43:44 -- common/autotest_common.sh@10 -- # set +x 00:13:30.952 [2024-12-14 06:43:44.710926] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.952 06:43:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.952 06:43:44 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:30.952 06:43:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.952 06:43:44 -- common/autotest_common.sh@10 -- # set +x 00:13:30.952 06:43:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.952 06:43:44 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.952 06:43:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.952 06:43:44 -- common/autotest_common.sh@10 -- # set +x 00:13:30.952 [2024-12-14 06:43:44.727108] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.952 06:43:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.952 06:43:44 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:30.952 06:43:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.952 06:43:44 -- common/autotest_common.sh@10 -- # set +x 00:13:30.952 NULL1 00:13:30.952 06:43:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.952 06:43:44 -- 
target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:30.952 06:43:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.953 06:43:44 -- common/autotest_common.sh@10 -- # set +x 00:13:30.953 Delay0 00:13:30.953 06:43:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.953 06:43:44 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.953 06:43:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.953 06:43:44 -- common/autotest_common.sh@10 -- # set +x 00:13:30.953 06:43:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.953 06:43:44 -- target/delete_subsystem.sh@28 -- # perf_pid=70700 00:13:30.953 06:43:44 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:30.953 06:43:44 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:31.211 [2024-12-14 06:43:44.941585] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:33.118 06:43:46 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:33.118 06:43:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.118 06:43:46 -- common/autotest_common.sh@10 -- # set +x 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 
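[Editor's note] The subsystem being exercised here is backed by a null bdev wrapped in a delay bdev, so each queued I/O sits in the target for roughly a second; that guarantees requests are still in flight when nvmf_delete_subsystem runs, and the "completed with error (sct=0, sc=8)" lines surrounding this note are the expected result. The bring-up, condensed from the rpc_cmd trace above (rpc_cmd is the test suite's wrapper for issuing SPDK RPCs over /var/tmp/spdk.sock):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512               # 1000 MB backing bdev, 512-byte blocks
  rpc_cmd bdev_delay_create -b NULL1 -d Delay0 \
          -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1s average/p99 read+write latency
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The ~1,000,000 us averages in the perf summaries further down are consistent with that configured delay.
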
starting I/O failed: -6 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Write completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 Read completed with error (sct=0, sc=8) 00:13:33.118 starting I/O failed: -6 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Read completed with 
error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 
Read completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 starting I/O failed: -6 00:13:33.119 starting I/O failed: -6 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, 
sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.119 starting I/O failed: -6 00:13:33.119 Read completed with error (sct=0, sc=8) 00:13:33.119 Write completed with error (sct=0, sc=8) 00:13:33.120 starting I/O failed: -6 00:13:33.120 Read completed with error (sct=0, sc=8) 00:13:33.120 Write completed with error (sct=0, sc=8) 00:13:33.120 starting I/O failed: -6 00:13:33.120 Write completed with error (sct=0, sc=8) 00:13:33.120 Read completed with error (sct=0, sc=8) 00:13:33.120 starting I/O failed: -6 00:13:33.120 Read completed with error (sct=0, sc=8) 00:13:33.120 Read completed with error (sct=0, sc=8) 00:13:33.120 starting I/O failed: -6 00:13:33.120 Write completed with error (sct=0, sc=8) 00:13:33.120 Read completed with error (sct=0, sc=8) 00:13:33.120 starting I/O failed: -6 00:13:33.120 Read completed with error (sct=0, sc=8) 00:13:33.120 Read completed with error (sct=0, sc=8) 00:13:33.120 starting I/O failed: -6 00:13:33.120 Read completed with error (sct=0, sc=8) 00:13:33.120 [2024-12-14 06:43:46.985050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9a4800c1d0 is same with the state(5) to be set 00:13:34.069 [2024-12-14 06:43:47.960629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24295a0 is same with the state(5) to be set 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 
00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 [2024-12-14 06:43:47.979567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9a4800bf20 is same with the state(5) to be set 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 [2024-12-14 06:43:47.980453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9a4800c480 is same with the state(5) to be set 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 
00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 [2024-12-14 06:43:47.981737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24277d0 is same with the state(5) to be set 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed 
with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Write completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 Read completed with error (sct=0, sc=8) 00:13:34.069 [2024-12-14 06:43:47.982161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2427d30 is same with the state(5) to be set 00:13:34.069 [2024-12-14 06:43:47.983249] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24295a0 (9): Bad file descriptor 00:13:34.069 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:34.069 06:43:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.069 06:43:47 -- target/delete_subsystem.sh@34 -- # delay=0 00:13:34.069 06:43:47 -- target/delete_subsystem.sh@35 -- # kill -0 70700 00:13:34.069 06:43:47 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:34.069 Initializing NVMe Controllers 00:13:34.069 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:34.069 Controller IO queue size 128, less than required. 00:13:34.069 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:34.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:34.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:34.069 Initialization complete. Launching workers. 
00:13:34.069 ======================================================== 00:13:34.069 Latency(us) 00:13:34.070 Device Information : IOPS MiB/s Average min max 00:13:34.070 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 190.04 0.09 893409.46 611.48 1013307.07 00:13:34.070 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 188.05 0.09 900040.61 1545.76 1015285.31 00:13:34.070 ======================================================== 00:13:34.070 Total : 378.09 0.18 896707.63 611.48 1015285.31 00:13:34.070 00:13:34.636 06:43:48 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:34.636 06:43:48 -- target/delete_subsystem.sh@35 -- # kill -0 70700 00:13:34.636 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (70700) - No such process 00:13:34.636 06:43:48 -- target/delete_subsystem.sh@45 -- # NOT wait 70700 00:13:34.636 06:43:48 -- common/autotest_common.sh@650 -- # local es=0 00:13:34.636 06:43:48 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 70700 00:13:34.636 06:43:48 -- common/autotest_common.sh@638 -- # local arg=wait 00:13:34.636 06:43:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.636 06:43:48 -- common/autotest_common.sh@642 -- # type -t wait 00:13:34.636 06:43:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.636 06:43:48 -- common/autotest_common.sh@653 -- # wait 70700 00:13:34.637 06:43:48 -- common/autotest_common.sh@653 -- # es=1 00:13:34.637 06:43:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:34.637 06:43:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:34.637 06:43:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:34.637 06:43:48 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:34.637 06:43:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.637 06:43:48 -- common/autotest_common.sh@10 -- # set +x 00:13:34.637 06:43:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.637 06:43:48 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.637 06:43:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.637 06:43:48 -- common/autotest_common.sh@10 -- # set +x 00:13:34.637 [2024-12-14 06:43:48.508494] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.637 06:43:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.637 06:43:48 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.637 06:43:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.637 06:43:48 -- common/autotest_common.sh@10 -- # set +x 00:13:34.637 06:43:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.637 06:43:48 -- target/delete_subsystem.sh@54 -- # perf_pid=70749 00:13:34.637 06:43:48 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:34.637 06:43:48 -- target/delete_subsystem.sh@56 -- # delay=0 00:13:34.637 06:43:48 -- target/delete_subsystem.sh@57 -- # kill -0 70749 00:13:34.637 06:43:48 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:34.895 [2024-12-14 06:43:48.685847] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: 
*WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:35.153 06:43:49 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:35.153 06:43:49 -- target/delete_subsystem.sh@57 -- # kill -0 70749 00:13:35.153 06:43:49 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:35.720 06:43:49 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:35.720 06:43:49 -- target/delete_subsystem.sh@57 -- # kill -0 70749 00:13:35.720 06:43:49 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:36.286 06:43:50 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:36.286 06:43:50 -- target/delete_subsystem.sh@57 -- # kill -0 70749 00:13:36.286 06:43:50 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:36.852 06:43:50 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:36.852 06:43:50 -- target/delete_subsystem.sh@57 -- # kill -0 70749 00:13:36.852 06:43:50 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:37.110 06:43:51 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:37.110 06:43:51 -- target/delete_subsystem.sh@57 -- # kill -0 70749 00:13:37.110 06:43:51 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:37.675 06:43:51 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:37.675 06:43:51 -- target/delete_subsystem.sh@57 -- # kill -0 70749 00:13:37.675 06:43:51 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:37.934 Initializing NVMe Controllers 00:13:37.934 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:37.934 Controller IO queue size 128, less than required. 00:13:37.934 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:37.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:37.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:37.934 Initialization complete. Launching workers. 
00:13:37.934 ======================================================== 00:13:37.934 Latency(us) 00:13:37.934 Device Information : IOPS MiB/s Average min max 00:13:37.934 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003210.98 1000172.89 1010953.89 00:13:37.934 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005907.46 1000191.53 1041967.26 00:13:37.934 ======================================================== 00:13:37.934 Total : 256.00 0.12 1004559.22 1000172.89 1041967.26 00:13:37.934 00:13:38.192 06:43:52 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:38.192 06:43:52 -- target/delete_subsystem.sh@57 -- # kill -0 70749 00:13:38.192 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (70749) - No such process 00:13:38.192 06:43:52 -- target/delete_subsystem.sh@67 -- # wait 70749 00:13:38.192 06:43:52 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:38.192 06:43:52 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:38.192 06:43:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:38.192 06:43:52 -- nvmf/common.sh@116 -- # sync 00:13:38.192 06:43:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:38.192 06:43:52 -- nvmf/common.sh@119 -- # set +e 00:13:38.192 06:43:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:38.192 06:43:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:38.192 rmmod nvme_tcp 00:13:38.192 rmmod nvme_fabrics 00:13:38.192 rmmod nvme_keyring 00:13:38.192 06:43:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:38.192 06:43:52 -- nvmf/common.sh@123 -- # set -e 00:13:38.192 06:43:52 -- nvmf/common.sh@124 -- # return 0 00:13:38.192 06:43:52 -- nvmf/common.sh@477 -- # '[' -n 70648 ']' 00:13:38.192 06:43:52 -- nvmf/common.sh@478 -- # killprocess 70648 00:13:38.192 06:43:52 -- common/autotest_common.sh@936 -- # '[' -z 70648 ']' 00:13:38.192 06:43:52 -- common/autotest_common.sh@940 -- # kill -0 70648 00:13:38.450 06:43:52 -- common/autotest_common.sh@941 -- # uname 00:13:38.450 06:43:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:38.450 06:43:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70648 00:13:38.450 killing process with pid 70648 00:13:38.450 06:43:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:38.450 06:43:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:38.451 06:43:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70648' 00:13:38.451 06:43:52 -- common/autotest_common.sh@955 -- # kill 70648 00:13:38.451 06:43:52 -- common/autotest_common.sh@960 -- # wait 70648 00:13:38.709 06:43:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:38.709 06:43:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:38.709 06:43:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:38.709 06:43:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:38.709 06:43:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:38.709 06:43:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.709 06:43:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:38.709 06:43:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.709 06:43:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:38.709 ************************************ 00:13:38.709 END TEST nvmf_delete_subsystem 00:13:38.709 ************************************ 
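[Editor's note] The core of the first exercise above (perf pid 70700) is: start spdk_nvme_perf against cnode1 in the background, give it a moment to queue I/O against the delayed namespace, delete the subsystem out from under it, then poll until the perf process dies and assert that it did not exit cleanly. A condensed sketch of that flow built from the commands in the trace (NOT is the autotest helper that succeeds only when its command fails):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2                                                    # let perf connect and queue I/O

  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # pull cnode1 away mid-run

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do                  # wait for perf to notice and exit
      (( delay++ > 30 )) && exit 1                           # fail the test if it hangs
      sleep 0.5
  done
  NOT wait "$perf_pid"                                       # perf must have failed, not completed

The second job (pid 70749, -t 3) is then allowed to run to completion against the re-created subsystem (the plain "wait 70749" above), after which the test finishes with nvmftestfini, whose module and namespace cleanup was sketched earlier.
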
00:13:38.709 00:13:38.709 real 0m9.589s 00:13:38.709 user 0m29.140s 00:13:38.709 sys 0m1.650s 00:13:38.709 06:43:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:38.709 06:43:52 -- common/autotest_common.sh@10 -- # set +x 00:13:38.709 06:43:52 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:13:38.709 06:43:52 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:13:38.709 06:43:52 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:38.709 06:43:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:38.709 06:43:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:38.709 06:43:52 -- common/autotest_common.sh@10 -- # set +x 00:13:38.709 ************************************ 00:13:38.709 START TEST nvmf_vfio_user 00:13:38.709 ************************************ 00:13:38.709 06:43:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:38.968 * Looking for test storage... 00:13:38.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:38.968 06:43:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:38.968 06:43:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:38.968 06:43:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:38.968 06:43:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:38.968 06:43:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:38.968 06:43:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:38.968 06:43:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:38.968 06:43:52 -- scripts/common.sh@335 -- # IFS=.-: 00:13:38.968 06:43:52 -- scripts/common.sh@335 -- # read -ra ver1 00:13:38.968 06:43:52 -- scripts/common.sh@336 -- # IFS=.-: 00:13:38.968 06:43:52 -- scripts/common.sh@336 -- # read -ra ver2 00:13:38.968 06:43:52 -- scripts/common.sh@337 -- # local 'op=<' 00:13:38.968 06:43:52 -- scripts/common.sh@339 -- # ver1_l=2 00:13:38.968 06:43:52 -- scripts/common.sh@340 -- # ver2_l=1 00:13:38.968 06:43:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:38.968 06:43:52 -- scripts/common.sh@343 -- # case "$op" in 00:13:38.968 06:43:52 -- scripts/common.sh@344 -- # : 1 00:13:38.968 06:43:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:38.968 06:43:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:38.968 06:43:52 -- scripts/common.sh@364 -- # decimal 1 00:13:38.968 06:43:52 -- scripts/common.sh@352 -- # local d=1 00:13:38.968 06:43:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:38.968 06:43:52 -- scripts/common.sh@354 -- # echo 1 00:13:38.968 06:43:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:38.968 06:43:52 -- scripts/common.sh@365 -- # decimal 2 00:13:38.968 06:43:52 -- scripts/common.sh@352 -- # local d=2 00:13:38.968 06:43:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:38.968 06:43:52 -- scripts/common.sh@354 -- # echo 2 00:13:38.968 06:43:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:38.968 06:43:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:38.968 06:43:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:38.968 06:43:52 -- scripts/common.sh@367 -- # return 0 00:13:38.968 06:43:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:38.968 06:43:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:38.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.968 --rc genhtml_branch_coverage=1 00:13:38.968 --rc genhtml_function_coverage=1 00:13:38.968 --rc genhtml_legend=1 00:13:38.968 --rc geninfo_all_blocks=1 00:13:38.968 --rc geninfo_unexecuted_blocks=1 00:13:38.968 00:13:38.968 ' 00:13:38.968 06:43:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:38.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.968 --rc genhtml_branch_coverage=1 00:13:38.968 --rc genhtml_function_coverage=1 00:13:38.968 --rc genhtml_legend=1 00:13:38.968 --rc geninfo_all_blocks=1 00:13:38.968 --rc geninfo_unexecuted_blocks=1 00:13:38.968 00:13:38.968 ' 00:13:38.968 06:43:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:38.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.968 --rc genhtml_branch_coverage=1 00:13:38.968 --rc genhtml_function_coverage=1 00:13:38.968 --rc genhtml_legend=1 00:13:38.968 --rc geninfo_all_blocks=1 00:13:38.968 --rc geninfo_unexecuted_blocks=1 00:13:38.968 00:13:38.968 ' 00:13:38.968 06:43:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:38.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.968 --rc genhtml_branch_coverage=1 00:13:38.968 --rc genhtml_function_coverage=1 00:13:38.968 --rc genhtml_legend=1 00:13:38.968 --rc geninfo_all_blocks=1 00:13:38.968 --rc geninfo_unexecuted_blocks=1 00:13:38.968 00:13:38.968 ' 00:13:38.968 06:43:52 -- target/nvmf_vfio_user.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:38.968 06:43:52 -- nvmf/common.sh@7 -- # uname -s 00:13:38.968 06:43:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.968 06:43:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.968 06:43:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.968 06:43:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.968 06:43:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.968 06:43:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.968 06:43:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.968 06:43:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.968 06:43:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.968 06:43:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.968 06:43:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 
00:13:38.968 06:43:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:13:38.968 06:43:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.968 06:43:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.968 06:43:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:38.968 06:43:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:38.968 06:43:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.968 06:43:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.968 06:43:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.968 06:43:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.969 06:43:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.969 06:43:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.969 06:43:52 -- paths/export.sh@5 -- # export PATH 00:13:38.969 06:43:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.969 06:43:52 -- nvmf/common.sh@46 -- # : 0 00:13:38.969 06:43:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:38.969 06:43:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:38.969 06:43:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:38.969 06:43:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.969 06:43:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.969 06:43:52 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:38.969 06:43:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:38.969 06:43:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:38.969 06:43:52 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:38.969 06:43:52 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:38.969 06:43:52 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:38.969 06:43:52 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:38.969 06:43:52 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:38.969 06:43:52 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:38.969 06:43:52 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:38.969 06:43:52 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:38.969 06:43:52 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:38.969 06:43:52 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:38.969 06:43:52 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=70880 00:13:38.969 06:43:52 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:38.969 Process pid: 70880 00:13:38.969 06:43:52 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 70880' 00:13:38.969 06:43:52 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:38.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.969 06:43:52 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 70880 00:13:38.969 06:43:52 -- common/autotest_common.sh@829 -- # '[' -z 70880 ']' 00:13:38.969 06:43:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.969 06:43:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:38.969 06:43:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.969 06:43:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:38.969 06:43:52 -- common/autotest_common.sh@10 -- # set +x 00:13:39.227 [2024-12-14 06:43:52.970785] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:39.227 [2024-12-14 06:43:52.971221] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.227 [2024-12-14 06:43:53.112741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:39.486 [2024-12-14 06:43:53.254437] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:39.486 [2024-12-14 06:43:53.254915] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.486 [2024-12-14 06:43:53.254945] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.486 [2024-12-14 06:43:53.254954] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
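Before the per-device RPCs run, setup_nvmf_vfio_user launches nvmf_tgt on cores 0-3 and blocks in waitforlisten until the RPC socket answers (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step above). A rough, hedged sketch of that launch-and-wait idea; the polling loop here is illustrative, not the actual autotest_common.sh helper:

# Start the target exactly as traced above, then poll the default RPC socket.
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in $(seq 1 100); do
    # rpc_get_methods answers once the app is listening on /var/tmp/spdk.sock
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done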
00:13:39.486 [2024-12-14 06:43:53.255102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.486 [2024-12-14 06:43:53.255225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.486 [2024-12-14 06:43:53.255724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:39.486 [2024-12-14 06:43:53.255757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.052 06:43:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.052 06:43:53 -- common/autotest_common.sh@862 -- # return 0 00:13:40.052 06:43:53 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:40.988 06:43:54 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:41.555 06:43:55 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:41.555 06:43:55 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:41.555 06:43:55 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:41.555 06:43:55 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:41.555 06:43:55 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:41.814 Malloc1 00:13:41.814 06:43:55 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:42.072 06:43:55 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:42.331 06:43:56 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:42.331 06:43:56 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:42.331 06:43:56 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:42.331 06:43:56 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:42.898 Malloc2 00:13:42.898 06:43:56 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:42.898 06:43:56 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:43.156 06:43:57 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:43.414 06:43:57 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:43.414 06:43:57 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:43.414 06:43:57 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:43.414 06:43:57 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:43.414 06:43:57 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:43.414 06:43:57 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:43.674 [2024-12-14 06:43:57.410447] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:13:43.674 [2024-12-14 06:43:57.410503] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71023 ] 00:13:43.674 [2024-12-14 06:43:57.550393] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:43.674 [2024-12-14 06:43:57.559431] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:43.674 [2024-12-14 06:43:57.559461] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f43b56ed000 00:13:43.674 [2024-12-14 06:43:57.560424] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:43.674 [2024-12-14 06:43:57.561428] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:43.674 [2024-12-14 06:43:57.562430] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:43.674 [2024-12-14 06:43:57.563427] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:43.674 [2024-12-14 06:43:57.564430] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:43.674 [2024-12-14 06:43:57.565435] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:43.674 [2024-12-14 06:43:57.566441] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:43.674 [2024-12-14 06:43:57.567447] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:43.674 [2024-12-14 06:43:57.568465] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:43.674 [2024-12-14 06:43:57.568519] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f43b4d13000 00:13:43.674 [2024-12-14 06:43:57.569937] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:43.674 [2024-12-14 06:43:57.590414] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:43.674 [2024-12-14 06:43:57.590460] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:13:43.674 [2024-12-14 06:43:57.595539] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:43.674 [2024-12-14 06:43:57.595598] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:43.674 [2024-12-14 06:43:57.595722] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:13:43.674 [2024-12-14 
06:43:57.595764] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:13:43.674 [2024-12-14 06:43:57.595774] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:13:43.674 [2024-12-14 06:43:57.596523] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:43.674 [2024-12-14 06:43:57.596545] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:13:43.674 [2024-12-14 06:43:57.596557] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:13:43.674 [2024-12-14 06:43:57.597529] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:43.674 [2024-12-14 06:43:57.597551] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:13:43.674 [2024-12-14 06:43:57.597563] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:13:43.674 [2024-12-14 06:43:57.598532] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:43.674 [2024-12-14 06:43:57.598555] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:43.674 [2024-12-14 06:43:57.599534] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:43.674 [2024-12-14 06:43:57.599553] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:13:43.674 [2024-12-14 06:43:57.599561] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:13:43.674 [2024-12-14 06:43:57.599570] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:43.674 [2024-12-14 06:43:57.599676] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:13:43.674 [2024-12-14 06:43:57.599682] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:43.674 [2024-12-14 06:43:57.599688] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:43.674 [2024-12-14 06:43:57.600543] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:43.674 [2024-12-14 06:43:57.601541] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:43.674 [2024-12-14 06:43:57.602543] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: 
offset 0x14, value 0x460001 00:13:43.674 [2024-12-14 06:43:57.603600] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:43.674 [2024-12-14 06:43:57.604552] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:43.674 [2024-12-14 06:43:57.604572] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:43.674 [2024-12-14 06:43:57.604579] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:13:43.674 [2024-12-14 06:43:57.604600] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:13:43.674 [2024-12-14 06:43:57.604618] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:13:43.674 [2024-12-14 06:43:57.604645] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:43.675 [2024-12-14 06:43:57.604672] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:43.675 [2024-12-14 06:43:57.604697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:43.675 [2024-12-14 06:43:57.604790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:43.675 [2024-12-14 06:43:57.604807] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:13:43.675 [2024-12-14 06:43:57.604813] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:13:43.675 [2024-12-14 06:43:57.604817] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:13:43.675 [2024-12-14 06:43:57.604823] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:43.675 [2024-12-14 06:43:57.604828] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:13:43.675 [2024-12-14 06:43:57.604834] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:13:43.675 [2024-12-14 06:43:57.604840] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:13:43.675 [2024-12-14 06:43:57.604855] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:13:43.675 [2024-12-14 06:43:57.604869] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:43.675 [2024-12-14 06:43:57.604888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:43.675 [2024-12-14 06:43:57.604904] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:43.675 [2024-12-14 06:43:57.604914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:43.675 [2024-12-14 06:43:57.604923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:43.675 [2024-12-14 06:43:57.604932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:43.675 [2024-12-14 06:43:57.604937] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:43.675 [2024-12-14 06:43:57.604967] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:43.675 [2024-12-14 06:43:57.604980] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:43.675 [2024-12-14 06:43:57.604991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:43.675 [2024-12-14 06:43:57.604999] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:43.675 [2024-12-14 06:43:57.605005] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:43.675 [2024-12-14 06:43:57.605013] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:43.675 [2024-12-14 06:43:57.605035] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:43.675 [2024-12-14 06:43:57.605050] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:43.675 [2024-12-14 06:43:57.605062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:43.675 [2024-12-14 06:43:57.605123] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:13:43.675 [2024-12-14 06:43:57.605134] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:43.675 [2024-12-14 06:43:57.605143] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:43.675 [2024-12-14 06:43:57.605148] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:43.675 [2024-12-14 06:43:57.605156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:43.675 [2024-12-14 06:43:57.605172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:43.675 [2024-12-14 
06:43:57.605205] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:43.675 [2024-12-14 06:43:57.605221] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:43.675 [2024-12-14 06:43:57.605233] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:43.675 [2024-12-14 06:43:57.605241] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:43.675 [2024-12-14 06:43:57.605247] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:43.675 [2024-12-14 06:43:57.605254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:43.675 [2024-12-14 06:43:57.605292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:43.675 [2024-12-14 06:43:57.605313] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:43.675 [2024-12-14 06:43:57.605324] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:43.675 [2024-12-14 06:43:57.605333] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:43.675 [2024-12-14 06:43:57.605338] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:43.675 [2024-12-14 06:43:57.605345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:43.675 [2024-12-14 06:43:57.605356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:43.675 [2024-12-14 06:43:57.605366] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:43.675 [2024-12-14 06:43:57.605374] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:13:43.675 [2024-12-14 06:43:57.605385] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:43.675 [2024-12-14 06:43:57.605393] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:43.675 [2024-12-14 06:43:57.605399] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:43.675 [2024-12-14 06:43:57.605404] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:43.675 [2024-12-14 06:43:57.605410] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:43.675 [2024-12-14 06:43:57.605415] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:43.675 [2024-12-14 06:43:57.605449] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:43.675 [2024-12-14 06:43:57.605469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:43.675 [2024-12-14 06:43:57.605485] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:43.675 [2024-12-14 06:43:57.605493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:43.675 [2024-12-14 06:43:57.605506] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:43.675 [2024-12-14 06:43:57.605514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:43.675 [2024-12-14 06:43:57.605527] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:43.675 [2024-12-14 06:43:57.605542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:43.675 [2024-12-14 06:43:57.605556] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:43.675 [2024-12-14 06:43:57.605562] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:43.675 [2024-12-14 06:43:57.605566] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:43.675 [2024-12-14 06:43:57.605569] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:43.675 [2024-12-14 06:43:57.605576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:43.675 [2024-12-14 06:43:57.605584] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:43.675 [2024-12-14 06:43:57.605589] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:43.675 [2024-12-14 06:43:57.605596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:43.675 [2024-12-14 06:43:57.605604] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:43.675 [2024-12-14 06:43:57.605609] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:43.675 [2024-12-14 06:43:57.605615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:43.675 [2024-12-14 06:43:57.605623] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:43.675 ===================================================== 00:13:43.675 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:43.675 ===================================================== 
00:13:43.675 Controller Capabilities/Features 00:13:43.675 ================================ 00:13:43.675 Vendor ID: 4e58 00:13:43.675 Subsystem Vendor ID: 4e58 00:13:43.675 Serial Number: SPDK1 00:13:43.675 Model Number: SPDK bdev Controller 00:13:43.675 Firmware Version: 24.01.1 00:13:43.675 Recommended Arb Burst: 6 00:13:43.675 IEEE OUI Identifier: 8d 6b 50 00:13:43.675 Multi-path I/O 00:13:43.675 May have multiple subsystem ports: Yes 00:13:43.675 May have multiple controllers: Yes 00:13:43.675 Associated with SR-IOV VF: No 00:13:43.675 Max Data Transfer Size: 131072 00:13:43.675 Max Number of Namespaces: 32 00:13:43.675 Max Number of I/O Queues: 127 00:13:43.675 NVMe Specification Version (VS): 1.3 00:13:43.676 NVMe Specification Version (Identify): 1.3 00:13:43.676 Maximum Queue Entries: 256 00:13:43.676 Contiguous Queues Required: Yes 00:13:43.676 Arbitration Mechanisms Supported 00:13:43.676 Weighted Round Robin: Not Supported 00:13:43.676 Vendor Specific: Not Supported 00:13:43.676 Reset Timeout: 15000 ms 00:13:43.676 Doorbell Stride: 4 bytes 00:13:43.676 NVM Subsystem Reset: Not Supported 00:13:43.676 Command Sets Supported 00:13:43.676 NVM Command Set: Supported 00:13:43.676 Boot Partition: Not Supported 00:13:43.676 Memory Page Size Minimum: 4096 bytes 00:13:43.676 Memory Page Size Maximum: 4096 bytes 00:13:43.676 Persistent Memory Region: Not Supported 00:13:43.676 Optional Asynchronous Events Supported 00:13:43.676 Namespace Attribute Notices: Supported 00:13:43.676 Firmware Activation Notices: Not Supported 00:13:43.676 ANA Change Notices: Not Supported 00:13:43.676 PLE Aggregate Log Change Notices: Not Supported 00:13:43.676 LBA Status Info Alert Notices: Not Supported 00:13:43.676 EGE Aggregate Log Change Notices: Not Supported 00:13:43.676 Normal NVM Subsystem Shutdown event: Not Supported 00:13:43.676 Zone Descriptor Change Notices: Not Supported 00:13:43.676 Discovery Log Change Notices: Not Supported 00:13:43.676 Controller Attributes 00:13:43.676 128-bit Host Identifier: Supported 00:13:43.676 Non-Operational Permissive Mode: Not Supported 00:13:43.676 NVM Sets: Not Supported 00:13:43.676 Read Recovery Levels: Not Supported 00:13:43.676 Endurance Groups: Not Supported 00:13:43.676 Predictable Latency Mode: Not Supported 00:13:43.676 Traffic Based Keep ALive: Not Supported 00:13:43.676 Namespace Granularity: Not Supported 00:13:43.676 SQ Associations: Not Supported 00:13:43.676 UUID List: Not Supported 00:13:43.676 Multi-Domain Subsystem: Not Supported 00:13:43.676 Fixed Capacity Management: Not Supported 00:13:43.676 Variable Capacity Management: Not Supported 00:13:43.676 Delete Endurance Group: Not Supported 00:13:43.676 Delete NVM Set: Not Supported 00:13:43.676 Extended LBA Formats Supported: Not Supported 00:13:43.676 Flexible Data Placement Supported: Not Supported 00:13:43.676 00:13:43.676 Controller Memory Buffer Support 00:13:43.676 ================================ 00:13:43.676 Supported: No 00:13:43.676 00:13:43.676 Persistent Memory Region Support 00:13:43.676 ================================ 00:13:43.676 Supported: No 00:13:43.676 00:13:43.676 Admin Command Set Attributes 00:13:43.676 ============================ 00:13:43.676 Security Send/Receive: Not Supported 00:13:43.676 Format NVM: Not Supported 00:13:43.676 Firmware Activate/Download: Not Supported 00:13:43.676 Namespace Management: Not Supported 00:13:43.676 Device Self-Test: Not Supported 00:13:43.676 Directives: Not Supported 00:13:43.676 NVMe-MI: Not Supported 00:13:43.676 Virtualization 
Management: Not Supported 00:13:43.676 Doorbell Buffer Config: Not Supported 00:13:43.676 Get LBA Status Capability: Not Supported 00:13:43.676 Command & Feature Lockdown Capability: Not Supported 00:13:43.676 Abort Command Limit: 4 00:13:43.676 Async Event Request Limit: 4 00:13:43.676 Number of Firmware Slots: N/A 00:13:43.676 Firmware Slot 1 Read-Only: N/A 00:13:43.676 Firmware Activation Wit[2024-12-14 06:43:57.605628] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:43.676 [2024-12-14 06:43:57.605635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:43.676 [2024-12-14 06:43:57.605642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:43.676 [2024-12-14 06:43:57.605659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:43.676 [2024-12-14 06:43:57.605671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:43.676 [2024-12-14 06:43:57.605680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:43.676 hout Reset: N/A 00:13:43.676 Multiple Update Detection Support: N/A 00:13:43.676 Firmware Update Granularity: No Information Provided 00:13:43.676 Per-Namespace SMART Log: No 00:13:43.676 Asymmetric Namespace Access Log Page: Not Supported 00:13:43.676 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:43.676 Command Effects Log Page: Supported 00:13:43.676 Get Log Page Extended Data: Supported 00:13:43.676 Telemetry Log Pages: Not Supported 00:13:43.676 Persistent Event Log Pages: Not Supported 00:13:43.676 Supported Log Pages Log Page: May Support 00:13:43.676 Commands Supported & Effects Log Page: Not Supported 00:13:43.676 Feature Identifiers & Effects Log Page:May Support 00:13:43.676 NVMe-MI Commands & Effects Log Page: May Support 00:13:43.676 Data Area 4 for Telemetry Log: Not Supported 00:13:43.676 Error Log Page Entries Supported: 128 00:13:43.676 Keep Alive: Supported 00:13:43.676 Keep Alive Granularity: 10000 ms 00:13:43.676 00:13:43.676 NVM Command Set Attributes 00:13:43.676 ========================== 00:13:43.676 Submission Queue Entry Size 00:13:43.676 Max: 64 00:13:43.676 Min: 64 00:13:43.676 Completion Queue Entry Size 00:13:43.676 Max: 16 00:13:43.676 Min: 16 00:13:43.676 Number of Namespaces: 32 00:13:43.676 Compare Command: Supported 00:13:43.676 Write Uncorrectable Command: Not Supported 00:13:43.676 Dataset Management Command: Supported 00:13:43.676 Write Zeroes Command: Supported 00:13:43.676 Set Features Save Field: Not Supported 00:13:43.676 Reservations: Not Supported 00:13:43.676 Timestamp: Not Supported 00:13:43.676 Copy: Supported 00:13:43.676 Volatile Write Cache: Present 00:13:43.676 Atomic Write Unit (Normal): 1 00:13:43.676 Atomic Write Unit (PFail): 1 00:13:43.676 Atomic Compare & Write Unit: 1 00:13:43.676 Fused Compare & Write: Supported 00:13:43.676 Scatter-Gather List 00:13:43.676 SGL Command Set: Supported (Dword aligned) 00:13:43.676 SGL Keyed: Not Supported 00:13:43.676 SGL Bit Bucket Descriptor: Not Supported 00:13:43.676 SGL Metadata Pointer: Not Supported 00:13:43.676 Oversized SGL: Not Supported 00:13:43.676 SGL Metadata Address: Not Supported 00:13:43.676 SGL Offset: Not Supported 00:13:43.676 Transport SGL 
Data Block: Not Supported 00:13:43.676 Replay Protected Memory Block: Not Supported 00:13:43.676 00:13:43.676 Firmware Slot Information 00:13:43.676 ========================= 00:13:43.676 Active slot: 1 00:13:43.676 Slot 1 Firmware Revision: 24.01.1 00:13:43.676 00:13:43.676 00:13:43.676 Commands Supported and Effects 00:13:43.676 ============================== 00:13:43.676 Admin Commands 00:13:43.676 -------------- 00:13:43.676 Get Log Page (02h): Supported 00:13:43.676 Identify (06h): Supported 00:13:43.676 Abort (08h): Supported 00:13:43.676 Set Features (09h): Supported 00:13:43.676 Get Features (0Ah): Supported 00:13:43.676 Asynchronous Event Request (0Ch): Supported 00:13:43.676 Keep Alive (18h): Supported 00:13:43.676 I/O Commands 00:13:43.676 ------------ 00:13:43.676 Flush (00h): Supported LBA-Change 00:13:43.676 Write (01h): Supported LBA-Change 00:13:43.676 Read (02h): Supported 00:13:43.676 Compare (05h): Supported 00:13:43.676 Write Zeroes (08h): Supported LBA-Change 00:13:43.676 Dataset Management (09h): Supported LBA-Change 00:13:43.676 Copy (19h): Supported LBA-Change 00:13:43.676 Unknown (79h): Supported LBA-Change 00:13:43.676 Unknown (7Ah): Supported 00:13:43.676 00:13:43.676 Error Log 00:13:43.676 ========= 00:13:43.676 00:13:43.676 Arbitration 00:13:43.676 =========== 00:13:43.676 Arbitration Burst: 1 00:13:43.676 00:13:43.676 Power Management 00:13:43.676 ================ 00:13:43.676 Number of Power States: 1 00:13:43.676 Current Power State: Power State #0 00:13:43.676 Power State #0: 00:13:43.676 Max Power: 0.00 W 00:13:43.676 Non-Operational State: Operational 00:13:43.676 Entry Latency: Not Reported 00:13:43.676 Exit Latency: Not Reported 00:13:43.676 Relative Read Throughput: 0 00:13:43.676 Relative Read Latency: 0 00:13:43.676 Relative Write Throughput: 0 00:13:43.676 Relative Write Latency: 0 00:13:43.676 Idle Power: Not Reported 00:13:43.676 Active Power: Not Reported 00:13:43.676 Non-Operational Permissive Mode: Not Supported 00:13:43.676 00:13:43.676 Health Information 00:13:43.676 ================== 00:13:43.676 Critical Warnings: 00:13:43.676 Available Spare Space: OK 00:13:43.676 Temperature: OK 00:13:43.676 Device Reliability: OK 00:13:43.676 Read Only: No 00:13:43.676 Volatile Memory Backup: OK 00:13:43.676 Current Temperature: 0 Kelvin[2024-12-14 06:43:57.605840] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:43.676 [2024-12-14 06:43:57.605855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:43.676 [2024-12-14 06:43:57.605897] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:13:43.676 [2024-12-14 06:43:57.605910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.676 [2024-12-14 06:43:57.605918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.676 [2024-12-14 06:43:57.605925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.677 [2024-12-14 06:43:57.605932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.677 [2024-12-14 06:43:57.606559] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:43.677 [2024-12-14 06:43:57.606583] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:43.677 [2024-12-14 06:43:57.607603] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:13:43.677 [2024-12-14 06:43:57.607613] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:13:43.677 [2024-12-14 06:43:57.608578] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:43.677 [2024-12-14 06:43:57.608600] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:13:43.677 [2024-12-14 06:43:57.608657] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:43.677 [2024-12-14 06:43:57.613957] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:43.677 (-273 Celsius) 00:13:43.677 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:43.677 Available Spare: 0% 00:13:43.677 Available Spare Threshold: 0% 00:13:43.677 Life Percentage Used: 0% 00:13:43.677 Data Units Read: 0 00:13:43.677 Data Units Written: 0 00:13:43.677 Host Read Commands: 0 00:13:43.677 Host Write Commands: 0 00:13:43.677 Controller Busy Time: 0 minutes 00:13:43.677 Power Cycles: 0 00:13:43.677 Power On Hours: 0 hours 00:13:43.677 Unsafe Shutdowns: 0 00:13:43.677 Unrecoverable Media Errors: 0 00:13:43.677 Lifetime Error Log Entries: 0 00:13:43.677 Warning Temperature Time: 0 minutes 00:13:43.677 Critical Temperature Time: 0 minutes 00:13:43.677 00:13:43.677 Number of Queues 00:13:43.677 ================ 00:13:43.677 Number of I/O Submission Queues: 127 00:13:43.677 Number of I/O Completion Queues: 127 00:13:43.677 00:13:43.677 Active Namespaces 00:13:43.677 ================= 00:13:43.677 Namespace ID:1 00:13:43.677 Error Recovery Timeout: Unlimited 00:13:43.677 Command Set Identifier: NVM (00h) 00:13:43.677 Deallocate: Supported 00:13:43.677 Deallocated/Unwritten Error: Not Supported 00:13:43.677 Deallocated Read Value: Unknown 00:13:43.677 Deallocate in Write Zeroes: Not Supported 00:13:43.677 Deallocated Guard Field: 0xFFFF 00:13:43.677 Flush: Supported 00:13:43.677 Reservation: Supported 00:13:43.677 Namespace Sharing Capabilities: Multiple Controllers 00:13:43.677 Size (in LBAs): 131072 (0GiB) 00:13:43.677 Capacity (in LBAs): 131072 (0GiB) 00:13:43.677 Utilization (in LBAs): 131072 (0GiB) 00:13:43.677 NGUID: B401C410C84D4F1D96C38F76FDB452A6 00:13:43.677 UUID: b401c410-c84d-4f1d-96c3-8f76fdb452a6 00:13:43.677 Thin Provisioning: Not Supported 00:13:43.677 Per-NS Atomic Units: Yes 00:13:43.677 Atomic Boundary Size (Normal): 0 00:13:43.677 Atomic Boundary Size (PFail): 0 00:13:43.677 Atomic Boundary Offset: 0 00:13:43.677 Maximum Single Source Range Length: 65535 00:13:43.677 Maximum Copy Length: 65535 00:13:43.677 Maximum Source Range Count: 1 00:13:43.677 NGUID/EUI64 Never Reused: No 00:13:43.677 Namespace Write Protected: No 00:13:43.677 Number of LBA Formats: 1 00:13:43.677 Current LBA Format: LBA Format #00 00:13:43.677 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:43.677 00:13:43.677 
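Stripped of the xtrace noise, the vfio-user path that produced the identify report above comes down to a short RPC sequence plus one identify over the user-space socket. The commands below are recapped from the trace earlier in this test (the second device repeats the same steps with Malloc2/cnode2); only the $rpc shorthand is added here for readability:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
# Identify the controller through the vfio-user socket (nvmf_vfio_user.sh@83 above):
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g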
06:43:57 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:50.262 Initializing NVMe Controllers 00:13:50.262 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:50.262 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:50.262 Initialization complete. Launching workers. 00:13:50.262 ======================================================== 00:13:50.262 Latency(us) 00:13:50.262 Device Information : IOPS MiB/s Average min max 00:13:50.262 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 36160.44 141.25 3539.19 1073.93 11205.82 00:13:50.262 ======================================================== 00:13:50.262 Total : 36160.44 141.25 3539.19 1073.93 11205.82 00:13:50.262 00:13:50.262 06:44:02 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:54.450 Initializing NVMe Controllers 00:13:54.450 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:54.450 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:54.450 Initialization complete. Launching workers. 00:13:54.450 ======================================================== 00:13:54.450 Latency(us) 00:13:54.450 Device Information : IOPS MiB/s Average min max 00:13:54.450 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15979.70 62.42 8015.42 3998.57 14071.36 00:13:54.450 ======================================================== 00:13:54.450 Total : 15979.70 62.42 8015.42 3998.57 14071.36 00:13:54.450 00:13:54.450 06:44:08 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:59.714 Initializing NVMe Controllers 00:13:59.714 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:59.714 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:59.714 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:59.714 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:59.714 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:59.714 Initialization complete. Launching workers. 
00:13:59.714 Starting thread on core 2 00:13:59.714 Starting thread on core 3 00:13:59.714 Starting thread on core 1 00:13:59.714 06:44:13 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:03.898 Initializing NVMe Controllers 00:14:03.898 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:03.898 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:03.898 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:03.898 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:03.898 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:03.898 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:03.898 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:14:03.898 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:03.898 Initialization complete. Launching workers. 00:14:03.898 Starting thread on core 1 with urgent priority queue 00:14:03.898 Starting thread on core 2 with urgent priority queue 00:14:03.898 Starting thread on core 3 with urgent priority queue 00:14:03.898 Starting thread on core 0 with urgent priority queue 00:14:03.898 SPDK bdev Controller (SPDK1 ) core 0: 5463.67 IO/s 18.30 secs/100000 ios 00:14:03.898 SPDK bdev Controller (SPDK1 ) core 1: 5457.33 IO/s 18.32 secs/100000 ios 00:14:03.898 SPDK bdev Controller (SPDK1 ) core 2: 5756.00 IO/s 17.37 secs/100000 ios 00:14:03.898 SPDK bdev Controller (SPDK1 ) core 3: 5779.67 IO/s 17.30 secs/100000 ios 00:14:03.898 ======================================================== 00:14:03.898 00:14:03.898 06:44:17 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:03.898 Initializing NVMe Controllers 00:14:03.898 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:03.898 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:03.898 Namespace ID: 1 size: 0GB 00:14:03.898 Initialization complete. 00:14:03.898 INFO: using host memory buffer for IO 00:14:03.898 Hello world! 00:14:03.898 06:44:17 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:04.833 Initializing NVMe Controllers 00:14:04.833 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:04.833 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:04.833 Initialization complete. Launching workers. 
00:14:04.833 submit (in ns) avg, min, max = 11397.4, 3861.8, 4028055.5 00:14:04.833 complete (in ns) avg, min, max = 22606.1, 2115.5, 4042050.9 00:14:04.833 00:14:04.833 Submit histogram 00:14:04.833 ================ 00:14:04.833 Range in us Cumulative Count 00:14:04.833 3.840 - 3.869: 0.0369% ( 5) 00:14:04.833 3.869 - 3.898: 2.2086% ( 294) 00:14:04.833 3.898 - 3.927: 10.7992% ( 1163) 00:14:04.833 3.927 - 3.956: 22.9207% ( 1641) 00:14:04.833 3.956 - 3.985: 34.8501% ( 1615) 00:14:04.833 3.985 - 4.015: 46.5652% ( 1586) 00:14:04.833 4.015 - 4.044: 57.4826% ( 1478) 00:14:04.833 4.044 - 4.073: 66.3318% ( 1198) 00:14:04.833 4.073 - 4.102: 73.3417% ( 949) 00:14:04.833 4.102 - 4.131: 78.6970% ( 725) 00:14:04.833 4.131 - 4.160: 82.0062% ( 448) 00:14:04.833 4.160 - 4.189: 83.9046% ( 257) 00:14:04.833 4.189 - 4.218: 85.4853% ( 214) 00:14:04.833 4.218 - 4.247: 86.5490% ( 144) 00:14:04.833 4.247 - 4.276: 87.7013% ( 156) 00:14:04.833 4.276 - 4.305: 89.0604% ( 184) 00:14:04.833 4.305 - 4.335: 90.9809% ( 260) 00:14:04.833 4.335 - 4.364: 92.7390% ( 238) 00:14:04.833 4.364 - 4.393: 94.3419% ( 217) 00:14:04.833 4.393 - 4.422: 95.6050% ( 171) 00:14:04.833 4.422 - 4.451: 96.4544% ( 115) 00:14:04.833 4.451 - 4.480: 97.0823% ( 85) 00:14:04.833 4.480 - 4.509: 97.5920% ( 69) 00:14:04.833 4.509 - 4.538: 97.8579% ( 36) 00:14:04.833 4.538 - 4.567: 98.0352% ( 24) 00:14:04.833 4.567 - 4.596: 98.1607% ( 17) 00:14:04.833 4.596 - 4.625: 98.2198% ( 8) 00:14:04.833 4.625 - 4.655: 98.2494% ( 4) 00:14:04.833 4.655 - 4.684: 98.2863% ( 5) 00:14:04.833 4.684 - 4.713: 98.3602% ( 10) 00:14:04.833 4.713 - 4.742: 98.4193% ( 8) 00:14:04.833 4.742 - 4.771: 98.5153% ( 13) 00:14:04.833 4.771 - 4.800: 98.5892% ( 10) 00:14:04.833 4.800 - 4.829: 98.6852% ( 13) 00:14:04.833 4.829 - 4.858: 98.7147% ( 4) 00:14:04.833 4.858 - 4.887: 98.7295% ( 2) 00:14:04.833 4.887 - 4.916: 98.7886% ( 8) 00:14:04.833 4.916 - 4.945: 98.8698% ( 11) 00:14:04.833 4.945 - 4.975: 98.8846% ( 2) 00:14:04.833 4.975 - 5.004: 98.9068% ( 3) 00:14:04.833 5.004 - 5.033: 98.9142% ( 1) 00:14:04.833 5.033 - 5.062: 98.9437% ( 4) 00:14:04.833 5.062 - 5.091: 98.9511% ( 1) 00:14:04.833 5.091 - 5.120: 98.9659% ( 2) 00:14:04.833 5.120 - 5.149: 98.9806% ( 2) 00:14:04.833 5.149 - 5.178: 98.9880% ( 1) 00:14:04.833 5.236 - 5.265: 99.0102% ( 3) 00:14:04.833 5.295 - 5.324: 99.0250% ( 2) 00:14:04.833 5.411 - 5.440: 99.0324% ( 1) 00:14:04.833 5.469 - 5.498: 99.0397% ( 1) 00:14:04.833 5.556 - 5.585: 99.0471% ( 1) 00:14:04.833 5.673 - 5.702: 99.0619% ( 2) 00:14:04.833 6.080 - 6.109: 99.0693% ( 1) 00:14:04.833 9.076 - 9.135: 99.0767% ( 1) 00:14:04.833 9.251 - 9.309: 99.0914% ( 2) 00:14:04.833 9.309 - 9.367: 99.1136% ( 3) 00:14:04.833 9.367 - 9.425: 99.1284% ( 2) 00:14:04.833 9.484 - 9.542: 99.1579% ( 4) 00:14:04.833 9.542 - 9.600: 99.1727% ( 2) 00:14:04.833 9.600 - 9.658: 99.1801% ( 1) 00:14:04.833 9.658 - 9.716: 99.1949% ( 2) 00:14:04.833 9.716 - 9.775: 99.2096% ( 2) 00:14:04.833 9.775 - 9.833: 99.2244% ( 2) 00:14:04.833 9.833 - 9.891: 99.2613% ( 5) 00:14:04.833 9.891 - 9.949: 99.2761% ( 2) 00:14:04.833 9.949 - 10.007: 99.2835% ( 1) 00:14:04.833 10.065 - 10.124: 99.2983% ( 2) 00:14:04.833 10.124 - 10.182: 99.3057% ( 1) 00:14:04.833 10.182 - 10.240: 99.3204% ( 2) 00:14:04.833 10.240 - 10.298: 99.3426% ( 3) 00:14:04.833 10.356 - 10.415: 99.3648% ( 3) 00:14:04.833 10.473 - 10.531: 99.4017% ( 5) 00:14:04.833 10.531 - 10.589: 99.4238% ( 3) 00:14:04.833 10.589 - 10.647: 99.4386% ( 2) 00:14:04.833 10.705 - 10.764: 99.4460% ( 1) 00:14:04.833 10.764 - 10.822: 99.4534% ( 1) 00:14:04.833 10.996 - 
11.055: 99.4608% ( 1) 00:14:04.833 11.927 - 11.985: 99.4682% ( 1) 00:14:04.833 12.044 - 12.102: 99.4829% ( 2) 00:14:04.833 15.127 - 15.244: 99.4903% ( 1) 00:14:04.833 16.640 - 16.756: 99.4977% ( 1) 00:14:04.833 16.756 - 16.873: 99.5125% ( 2) 00:14:04.833 16.873 - 16.989: 99.5199% ( 1) 00:14:04.833 18.269 - 18.385: 99.5273% ( 1) 00:14:04.833 18.385 - 18.502: 99.5420% ( 2) 00:14:04.833 18.502 - 18.618: 99.5642% ( 3) 00:14:04.833 18.618 - 18.735: 99.5790% ( 2) 00:14:04.833 18.735 - 18.851: 99.5937% ( 2) 00:14:04.833 18.851 - 18.967: 99.6085% ( 2) 00:14:04.833 18.967 - 19.084: 99.6307% ( 3) 00:14:04.833 19.084 - 19.200: 99.6381% ( 1) 00:14:04.833 19.433 - 19.549: 99.6454% ( 1) 00:14:04.833 19.549 - 19.665: 99.6528% ( 1) 00:14:04.833 19.665 - 19.782: 99.6676% ( 2) 00:14:04.833 19.782 - 19.898: 99.6898% ( 3) 00:14:04.833 19.898 - 20.015: 99.7119% ( 3) 00:14:04.833 20.015 - 20.131: 99.7267% ( 2) 00:14:04.833 20.131 - 20.247: 99.7636% ( 5) 00:14:04.833 20.247 - 20.364: 99.7784% ( 2) 00:14:04.833 20.364 - 20.480: 99.7858% ( 1) 00:14:04.833 20.480 - 20.596: 99.7932% ( 1) 00:14:04.833 24.669 - 24.785: 99.8006% ( 1) 00:14:04.834 25.833 - 25.949: 99.8079% ( 1) 00:14:04.834 26.415 - 26.531: 99.8153% ( 1) 00:14:04.834 3023.593 - 3038.487: 99.8227% ( 1) 00:14:04.834 3038.487 - 3053.382: 99.8301% ( 1) 00:14:04.834 3961.949 - 3991.738: 99.8449% ( 2) 00:14:04.834 3991.738 - 4021.527: 99.9926% ( 20) 00:14:04.834 4021.527 - 4051.316: 100.0000% ( 1) 00:14:04.834 00:14:04.834 Complete histogram 00:14:04.834 ================== 00:14:04.834 Range in us Cumulative Count 00:14:04.834 2.109 - 2.124: 0.0295% ( 4) 00:14:04.834 2.124 - 2.138: 15.3125% ( 2069) 00:14:04.834 2.138 - 2.153: 57.1798% ( 5668) 00:14:04.834 2.153 - 2.167: 64.4852% ( 989) 00:14:04.834 2.167 - 2.182: 64.8693% ( 52) 00:14:04.834 2.182 - 2.196: 65.1278% ( 35) 00:14:04.834 2.196 - 2.211: 68.2745% ( 426) 00:14:04.834 2.211 - 2.225: 87.8933% ( 2656) 00:14:04.834 2.225 - 2.240: 95.0140% ( 964) 00:14:04.834 2.240 - 2.255: 95.8561% ( 114) 00:14:04.834 2.255 - 2.269: 96.4175% ( 76) 00:14:04.834 2.269 - 2.284: 96.9567% ( 73) 00:14:04.834 2.284 - 2.298: 97.4959% ( 73) 00:14:04.834 2.298 - 2.313: 98.0352% ( 73) 00:14:04.834 2.313 - 2.327: 98.2789% ( 33) 00:14:04.834 2.327 - 2.342: 98.4414% ( 22) 00:14:04.834 2.342 - 2.356: 98.5448% ( 14) 00:14:04.834 2.356 - 2.371: 98.5965% ( 7) 00:14:04.834 2.371 - 2.385: 98.6039% ( 1) 00:14:04.834 2.400 - 2.415: 98.6409% ( 5) 00:14:04.834 2.415 - 2.429: 98.6482% ( 1) 00:14:04.834 2.429 - 2.444: 98.6630% ( 2) 00:14:04.834 2.444 - 2.458: 98.6926% ( 4) 00:14:04.834 2.458 - 2.473: 98.7221% ( 4) 00:14:04.834 2.473 - 2.487: 98.7369% ( 2) 00:14:04.834 2.487 - 2.502: 98.7517% ( 2) 00:14:04.834 2.502 - 2.516: 98.7590% ( 1) 00:14:04.834 2.531 - 2.545: 98.7886% ( 4) 00:14:04.834 2.545 - 2.560: 98.8181% ( 4) 00:14:04.834 2.575 - 2.589: 98.8329% ( 2) 00:14:04.834 2.589 - 2.604: 98.8403% ( 1) 00:14:04.834 2.604 - 2.618: 98.8477% ( 1) 00:14:04.834 2.633 - 2.647: 98.8551% ( 1) 00:14:04.834 2.793 - 2.807: 98.8625% ( 1) 00:14:04.834 3.302 - 3.316: 98.8698% ( 1) 00:14:04.834 3.462 - 3.476: 98.8772% ( 1) 00:14:04.834 3.549 - 3.564: 98.8846% ( 1) 00:14:04.834 3.564 - 3.578: 98.8920% ( 1) 00:14:04.834 3.578 - 3.593: 98.9068% ( 2) 00:14:04.834 3.593 - 3.607: 98.9142% ( 1) 00:14:04.834 3.607 - 3.622: 98.9216% ( 1) 00:14:04.834 3.622 - 3.636: 98.9289% ( 1) 00:14:04.834 3.636 - 3.651: 98.9363% ( 1) 00:14:04.834 3.651 - 3.665: 98.9511% ( 2) 00:14:04.834 3.680 - 3.695: 98.9585% ( 1) 00:14:04.834 3.695 - 3.709: 98.9806% ( 3) 00:14:04.834 3.753 - 
3.782: 98.9954% ( 2) 00:14:04.834 3.811 - 3.840: 99.0028% ( 1) 00:14:04.834 3.840 - 3.869: 99.0102% ( 1) 00:14:04.834 4.102 - 4.131: 99.0176% ( 1) 00:14:04.834 4.131 - 4.160: 99.0324% ( 2) 00:14:04.834 4.160 - 4.189: 99.0397% ( 1) 00:14:04.834 4.305 - 4.335: 99.0471% ( 1) 00:14:04.834 4.364 - 4.393: 99.0545% ( 1) 00:14:04.834 4.393 - 4.422: 99.0619% ( 1) 00:14:05.092 4.451 - 4.480: 99.0693% ( 1) 00:14:05.092 4.480 - 4.509: 99.0767% ( 1) 00:14:05.092 4.596 - 4.625: 99.0914% ( 2) 00:14:05.092 4.684 - 4.713: 99.0988% ( 1) 00:14:05.092 7.011 - 7.040: 99.1062% ( 1) 00:14:05.092 7.215 - 7.244: 99.1136% ( 1) 00:14:05.092 7.564 - 7.622: 99.1284% ( 2) 00:14:05.092 7.622 - 7.680: 99.1358% ( 1) 00:14:05.092 7.855 - 7.913: 99.1579% ( 3) 00:14:05.092 8.087 - 8.145: 99.1653% ( 1) 00:14:05.092 8.145 - 8.204: 99.1727% ( 1) 00:14:05.092 8.204 - 8.262: 99.1801% ( 1) 00:14:05.092 8.378 - 8.436: 99.1875% ( 1) 00:14:05.092 8.495 - 8.553: 99.2022% ( 2) 00:14:05.092 8.553 - 8.611: 99.2096% ( 1) 00:14:05.092 8.611 - 8.669: 99.2318% ( 3) 00:14:05.092 8.669 - 8.727: 99.2392% ( 1) 00:14:05.093 8.902 - 8.960: 99.2466% ( 1) 00:14:05.093 9.076 - 9.135: 99.2540% ( 1) 00:14:05.093 9.135 - 9.193: 99.2613% ( 1) 00:14:05.093 9.775 - 9.833: 99.2687% ( 1) 00:14:05.093 9.949 - 10.007: 99.2761% ( 1) 00:14:05.093 10.880 - 10.938: 99.2835% ( 1) 00:14:05.093 10.938 - 10.996: 99.2909% ( 1) 00:14:05.093 12.102 - 12.160: 99.2983% ( 1) 00:14:05.093 13.905 - 13.964: 99.3057% ( 1) 00:14:05.093 16.524 - 16.640: 99.3130% ( 1) 00:14:05.093 16.640 - 16.756: 99.3278% ( 2) 00:14:05.093 16.756 - 16.873: 99.3648% ( 5) 00:14:05.093 17.105 - 17.222: 99.3795% ( 2) 00:14:05.093 17.222 - 17.338: 99.3943% ( 2) 00:14:05.093 17.687 - 17.804: 99.4091% ( 2) 00:14:05.093 17.804 - 17.920: 99.4165% ( 1) 00:14:05.093 17.920 - 18.036: 99.4238% ( 1) 00:14:05.093 18.036 - 18.153: 99.4386% ( 2) 00:14:05.093 18.153 - 18.269: 99.4460% ( 1) 00:14:05.093 18.269 - 18.385: 99.4534% ( 1) 00:14:05.093 18.502 - 18.618: 99.4608% ( 1) 00:14:05.093 18.618 - 18.735: 99.4682% ( 1) 00:14:05.093 45.382 - 45.615: 99.4756% ( 1) 00:14:05.093 56.785 - 57.018: 99.4829% ( 1) 00:14:05.093 3023.593 - 3038.487: 99.4903% ( 1) 00:14:05.093 3038.487 - 3053.382: 99.5199% ( 4) 00:14:05.093 3961.949 - 3991.738: 99.5346% ( 2) 00:14:05.093 3991.738 - 4021.527: 99.8744% ( 46) 00:14:05.093 4021.527 - 4051.316: 100.0000% ( 17) 00:14:05.093 00:14:05.093 06:44:18 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:05.093 06:44:18 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:05.093 06:44:18 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:05.093 06:44:18 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:05.093 06:44:18 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:05.351 [2024-12-14 06:44:19.113052] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:14:05.351 [ 00:14:05.351 { 00:14:05.351 "allow_any_host": true, 00:14:05.351 "hosts": [], 00:14:05.351 "listen_addresses": [], 00:14:05.351 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:05.351 "subtype": "Discovery" 00:14:05.351 }, 00:14:05.351 { 00:14:05.351 "allow_any_host": true, 00:14:05.351 "hosts": [], 00:14:05.351 "listen_addresses": [ 00:14:05.351 { 00:14:05.351 "adrfam": "IPv4", 00:14:05.351 
"traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:05.351 "transport": "VFIOUSER", 00:14:05.351 "trsvcid": "0", 00:14:05.351 "trtype": "VFIOUSER" 00:14:05.351 } 00:14:05.351 ], 00:14:05.351 "max_cntlid": 65519, 00:14:05.351 "max_namespaces": 32, 00:14:05.351 "min_cntlid": 1, 00:14:05.351 "model_number": "SPDK bdev Controller", 00:14:05.351 "namespaces": [ 00:14:05.351 { 00:14:05.351 "bdev_name": "Malloc1", 00:14:05.351 "name": "Malloc1", 00:14:05.351 "nguid": "B401C410C84D4F1D96C38F76FDB452A6", 00:14:05.351 "nsid": 1, 00:14:05.351 "uuid": "b401c410-c84d-4f1d-96c3-8f76fdb452a6" 00:14:05.351 } 00:14:05.351 ], 00:14:05.351 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:05.351 "serial_number": "SPDK1", 00:14:05.351 "subtype": "NVMe" 00:14:05.351 }, 00:14:05.351 { 00:14:05.351 "allow_any_host": true, 00:14:05.351 "hosts": [], 00:14:05.351 "listen_addresses": [ 00:14:05.351 { 00:14:05.351 "adrfam": "IPv4", 00:14:05.351 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:05.351 "transport": "VFIOUSER", 00:14:05.351 "trsvcid": "0", 00:14:05.351 "trtype": "VFIOUSER" 00:14:05.351 } 00:14:05.351 ], 00:14:05.351 "max_cntlid": 65519, 00:14:05.351 "max_namespaces": 32, 00:14:05.351 "min_cntlid": 1, 00:14:05.351 "model_number": "SPDK bdev Controller", 00:14:05.351 "namespaces": [ 00:14:05.351 { 00:14:05.351 "bdev_name": "Malloc2", 00:14:05.351 "name": "Malloc2", 00:14:05.351 "nguid": "F761CC9E435C47038C8AB6508A606136", 00:14:05.351 "nsid": 1, 00:14:05.351 "uuid": "f761cc9e-435c-4703-8c8a-b6508a606136" 00:14:05.351 } 00:14:05.351 ], 00:14:05.351 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:05.351 "serial_number": "SPDK2", 00:14:05.351 "subtype": "NVMe" 00:14:05.351 } 00:14:05.351 ] 00:14:05.351 06:44:19 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:05.351 06:44:19 -- target/nvmf_vfio_user.sh@34 -- # aerpid=71277 00:14:05.351 06:44:19 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:05.351 06:44:19 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:05.351 06:44:19 -- common/autotest_common.sh@1254 -- # local i=0 00:14:05.351 06:44:19 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:05.351 06:44:19 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:14:05.351 06:44:19 -- common/autotest_common.sh@1257 -- # i=1 00:14:05.351 06:44:19 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:14:05.351 06:44:19 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:05.351 06:44:19 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:14:05.351 06:44:19 -- common/autotest_common.sh@1257 -- # i=2 00:14:05.351 06:44:19 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:14:05.609 06:44:19 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:05.609 06:44:19 -- common/autotest_common.sh@1256 -- # '[' 2 -lt 200 ']' 00:14:05.609 06:44:19 -- common/autotest_common.sh@1257 -- # i=3 00:14:05.609 06:44:19 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:14:05.609 06:44:19 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:05.609 06:44:19 -- common/autotest_common.sh@1261 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:05.609 06:44:19 -- common/autotest_common.sh@1265 -- # return 0 00:14:05.609 06:44:19 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:05.609 06:44:19 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:05.867 Malloc3 00:14:05.867 06:44:19 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:06.124 06:44:20 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:06.124 Asynchronous Event Request test 00:14:06.124 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:06.124 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:06.124 Registering asynchronous event callbacks... 00:14:06.124 Starting namespace attribute notice tests for all controllers... 00:14:06.124 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:06.124 aer_cb - Changed Namespace 00:14:06.124 Cleaning up... 00:14:06.383 [ 00:14:06.383 { 00:14:06.383 "allow_any_host": true, 00:14:06.383 "hosts": [], 00:14:06.383 "listen_addresses": [], 00:14:06.383 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:06.383 "subtype": "Discovery" 00:14:06.383 }, 00:14:06.383 { 00:14:06.383 "allow_any_host": true, 00:14:06.383 "hosts": [], 00:14:06.383 "listen_addresses": [ 00:14:06.383 { 00:14:06.383 "adrfam": "IPv4", 00:14:06.383 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:06.383 "transport": "VFIOUSER", 00:14:06.383 "trsvcid": "0", 00:14:06.383 "trtype": "VFIOUSER" 00:14:06.383 } 00:14:06.383 ], 00:14:06.383 "max_cntlid": 65519, 00:14:06.383 "max_namespaces": 32, 00:14:06.383 "min_cntlid": 1, 00:14:06.383 "model_number": "SPDK bdev Controller", 00:14:06.383 "namespaces": [ 00:14:06.383 { 00:14:06.383 "bdev_name": "Malloc1", 00:14:06.383 "name": "Malloc1", 00:14:06.383 "nguid": "B401C410C84D4F1D96C38F76FDB452A6", 00:14:06.383 "nsid": 1, 00:14:06.383 "uuid": "b401c410-c84d-4f1d-96c3-8f76fdb452a6" 00:14:06.383 }, 00:14:06.383 { 00:14:06.383 "bdev_name": "Malloc3", 00:14:06.383 "name": "Malloc3", 00:14:06.383 "nguid": "6730FE985A294FCEB7A2F57E72C7EF7B", 00:14:06.383 "nsid": 2, 00:14:06.383 "uuid": "6730fe98-5a29-4fce-b7a2-f57e72c7ef7b" 00:14:06.383 } 00:14:06.383 ], 00:14:06.383 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:06.383 "serial_number": "SPDK1", 00:14:06.383 "subtype": "NVMe" 00:14:06.383 }, 00:14:06.383 { 00:14:06.383 "allow_any_host": true, 00:14:06.383 "hosts": [], 00:14:06.383 "listen_addresses": [ 00:14:06.383 { 00:14:06.383 "adrfam": "IPv4", 00:14:06.383 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:06.383 "transport": "VFIOUSER", 00:14:06.383 "trsvcid": "0", 00:14:06.383 "trtype": "VFIOUSER" 00:14:06.383 } 00:14:06.383 ], 00:14:06.383 "max_cntlid": 65519, 00:14:06.383 "max_namespaces": 32, 00:14:06.383 "min_cntlid": 1, 00:14:06.383 "model_number": "SPDK bdev Controller", 00:14:06.383 "namespaces": [ 00:14:06.383 { 00:14:06.383 "bdev_name": "Malloc2", 00:14:06.383 "name": "Malloc2", 00:14:06.383 "nguid": "F761CC9E435C47038C8AB6508A606136", 00:14:06.383 "nsid": 1, 00:14:06.383 "uuid": "f761cc9e-435c-4703-8c8a-b6508a606136" 00:14:06.383 } 00:14:06.383 ], 00:14:06.383 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:06.383 "serial_number": "SPDK2", 00:14:06.383 "subtype": "NVMe" 00:14:06.383 } 00:14:06.383 ] 00:14:06.383 06:44:20 -- target/nvmf_vfio_user.sh@44 -- # wait 71277 
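The AER exercise above hot-adds a namespace and waits for the controller to post a Namespace Attribute Changed notice. A minimal sketch of that sequence, assuming an SPDK target is already serving nqn.2019-07.io.spdk:cnode1 over VFIOUSER at the vfio-user1 path used in this run, and assuming the commands are issued from the SPDK repository root (the log above uses absolute paths under /home/vagrant/spdk_repo/spdk):

  # Start the AER listener in the background; the touch file signals that it is
  # ready to receive events, so wait for it before changing the namespace set.
  ./test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file &
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
  # Create a second malloc bdev and attach it as namespace 2; the controller
  # should raise the namespace-attribute AER, which the listener reports as
  # "aer_cb - Changed Namespace" before cleaning up.
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
  ./scripts/rpc.py nvmf_get_subsystems   # Malloc3 now listed with "nsid": 2
  wait

The ordering matters: the listener must be attached before the nvmf_subsystem_add_ns call, otherwise the event fires with no one registered to observe it, which is why the harness above gates the RPCs on the touch file.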
00:14:06.383 06:44:20 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:06.383 06:44:20 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:06.383 06:44:20 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:06.383 06:44:20 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:06.383 [2024-12-14 06:44:20.337441] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:06.383 [2024-12-14 06:44:20.337498] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71315 ] 00:14:06.643 [2024-12-14 06:44:20.481397] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:06.643 [2024-12-14 06:44:20.489234] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:06.643 [2024-12-14 06:44:20.489296] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb2730ed000 00:14:06.643 [2024-12-14 06:44:20.490223] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.643 [2024-12-14 06:44:20.491220] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.643 [2024-12-14 06:44:20.492228] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.643 [2024-12-14 06:44:20.493234] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:06.643 [2024-12-14 06:44:20.494240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:06.643 [2024-12-14 06:44:20.495248] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.643 [2024-12-14 06:44:20.496250] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:06.643 [2024-12-14 06:44:20.497255] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.643 [2024-12-14 06:44:20.498269] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:06.643 [2024-12-14 06:44:20.498317] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb2728a4000 00:14:06.643 [2024-12-14 06:44:20.499570] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:06.643 [2024-12-14 06:44:20.516040] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:06.643 [2024-12-14 06:44:20.516120] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:06.643 [2024-12-14 06:44:20.518211] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:06.643 [2024-12-14 06:44:20.518310] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:06.643 [2024-12-14 06:44:20.518431] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:06.643 [2024-12-14 06:44:20.518466] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:06.643 [2024-12-14 06:44:20.518473] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:06.643 [2024-12-14 06:44:20.519197] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:06.643 [2024-12-14 06:44:20.519224] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:06.643 [2024-12-14 06:44:20.519236] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:06.643 [2024-12-14 06:44:20.520197] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:06.643 [2024-12-14 06:44:20.520222] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:06.643 [2024-12-14 06:44:20.520235] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:06.643 [2024-12-14 06:44:20.521201] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:06.643 [2024-12-14 06:44:20.521228] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:06.643 [2024-12-14 06:44:20.522205] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:06.643 [2024-12-14 06:44:20.522232] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:06.643 [2024-12-14 06:44:20.522239] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:06.643 [2024-12-14 06:44:20.522249] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:06.643 [2024-12-14 06:44:20.522356] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:06.643 [2024-12-14 06:44:20.522362] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 
00:14:06.643 [2024-12-14 06:44:20.522368] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:06.643 [2024-12-14 06:44:20.523217] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:06.643 [2024-12-14 06:44:20.527971] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:06.643 [2024-12-14 06:44:20.528234] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:06.643 [2024-12-14 06:44:20.529276] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:06.643 [2024-12-14 06:44:20.530243] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:06.643 [2024-12-14 06:44:20.530281] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:06.643 [2024-12-14 06:44:20.530288] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:06.643 [2024-12-14 06:44:20.530311] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:06.643 [2024-12-14 06:44:20.530330] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:06.643 [2024-12-14 06:44:20.530349] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:06.643 [2024-12-14 06:44:20.530356] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:06.643 [2024-12-14 06:44:20.530375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:06.643 [2024-12-14 06:44:20.534979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:06.643 [2024-12-14 06:44:20.535026] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:06.643 [2024-12-14 06:44:20.535034] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:06.643 [2024-12-14 06:44:20.535039] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:06.643 [2024-12-14 06:44:20.535044] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:06.643 [2024-12-14 06:44:20.535050] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:06.643 [2024-12-14 06:44:20.535055] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:06.643 [2024-12-14 06:44:20.535061] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:06.643 [2024-12-14 06:44:20.535078] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:06.643 [2024-12-14 06:44:20.535092] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:06.643 [2024-12-14 06:44:20.542975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:06.643 [2024-12-14 06:44:20.543024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.643 [2024-12-14 06:44:20.543036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.643 [2024-12-14 06:44:20.543046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.644 [2024-12-14 06:44:20.543055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.644 [2024-12-14 06:44:20.543061] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:06.644 [2024-12-14 06:44:20.543075] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:06.644 [2024-12-14 06:44:20.543086] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:06.644 [2024-12-14 06:44:20.550973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:06.644 [2024-12-14 06:44:20.550995] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:06.644 [2024-12-14 06:44:20.551018] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:06.644 [2024-12-14 06:44:20.551028] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:06.644 [2024-12-14 06:44:20.551040] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:06.644 [2024-12-14 06:44:20.551053] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:06.644 [2024-12-14 06:44:20.558970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:06.644 [2024-12-14 06:44:20.559068] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:06.644 [2024-12-14 06:44:20.559082] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns 
(timeout 30000 ms) 00:14:06.644 [2024-12-14 06:44:20.559093] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:06.644 [2024-12-14 06:44:20.559099] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:06.644 [2024-12-14 06:44:20.559107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:06.644 [2024-12-14 06:44:20.566971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:06.644 [2024-12-14 06:44:20.567025] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:06.644 [2024-12-14 06:44:20.567040] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:06.644 [2024-12-14 06:44:20.567051] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:06.644 [2024-12-14 06:44:20.567061] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:06.644 [2024-12-14 06:44:20.567066] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:06.644 [2024-12-14 06:44:20.567074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:06.644 [2024-12-14 06:44:20.574972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:06.644 [2024-12-14 06:44:20.575026] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:06.644 [2024-12-14 06:44:20.575040] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:06.644 [2024-12-14 06:44:20.575050] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:06.644 [2024-12-14 06:44:20.575056] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:06.644 [2024-12-14 06:44:20.575064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:06.644 [2024-12-14 06:44:20.582973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:06.644 [2024-12-14 06:44:20.583001] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:06.644 [2024-12-14 06:44:20.583028] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:06.644 [2024-12-14 06:44:20.583042] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:06.644 [2024-12-14 06:44:20.583049] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] 
setting state to set doorbell buffer config (timeout 30000 ms) 00:14:06.644 [2024-12-14 06:44:20.583055] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:06.644 [2024-12-14 06:44:20.583061] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:06.644 [2024-12-14 06:44:20.583066] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:06.644 [2024-12-14 06:44:20.583072] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:06.644 [2024-12-14 06:44:20.583100] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:06.644 [2024-12-14 06:44:20.590990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:06.644 [2024-12-14 06:44:20.591035] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:06.644 [2024-12-14 06:44:20.598973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:06.644 [2024-12-14 06:44:20.599019] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:06.644 [2024-12-14 06:44:20.606968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:06.644 [2024-12-14 06:44:20.607015] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:06.644 [2024-12-14 06:44:20.614970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:06.644 [2024-12-14 06:44:20.615020] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:06.644 [2024-12-14 06:44:20.615027] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:06.644 [2024-12-14 06:44:20.615031] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:06.644 [2024-12-14 06:44:20.615035] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:06.644 [2024-12-14 06:44:20.615043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:06.644 [2024-12-14 06:44:20.615052] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:06.644 [2024-12-14 06:44:20.615057] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:06.644 [2024-12-14 06:44:20.615064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:06.644 [2024-12-14 06:44:20.615072] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:06.644 [2024-12-14 06:44:20.615077] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:06.644 [2024-12-14 06:44:20.615084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:06.644 [2024-12-14 06:44:20.615093] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:06.644 [2024-12-14 06:44:20.615098] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:06.644 [2024-12-14 06:44:20.615104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:06.644 [2024-12-14 06:44:20.622969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:06.644 [2024-12-14 06:44:20.623026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:06.644 [2024-12-14 06:44:20.623041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:06.644 [2024-12-14 06:44:20.623050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:06.644 ===================================================== 00:14:06.644 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:06.644 ===================================================== 00:14:06.644 Controller Capabilities/Features 00:14:06.644 ================================ 00:14:06.644 Vendor ID: 4e58 00:14:06.644 Subsystem Vendor ID: 4e58 00:14:06.644 Serial Number: SPDK2 00:14:06.644 Model Number: SPDK bdev Controller 00:14:06.644 Firmware Version: 24.01.1 00:14:06.644 Recommended Arb Burst: 6 00:14:06.644 IEEE OUI Identifier: 8d 6b 50 00:14:06.644 Multi-path I/O 00:14:06.644 May have multiple subsystem ports: Yes 00:14:06.644 May have multiple controllers: Yes 00:14:06.644 Associated with SR-IOV VF: No 00:14:06.644 Max Data Transfer Size: 131072 00:14:06.644 Max Number of Namespaces: 32 00:14:06.644 Max Number of I/O Queues: 127 00:14:06.644 NVMe Specification Version (VS): 1.3 00:14:06.644 NVMe Specification Version (Identify): 1.3 00:14:06.644 Maximum Queue Entries: 256 00:14:06.644 Contiguous Queues Required: Yes 00:14:06.644 Arbitration Mechanisms Supported 00:14:06.644 Weighted Round Robin: Not Supported 00:14:06.644 Vendor Specific: Not Supported 00:14:06.644 Reset Timeout: 15000 ms 00:14:06.644 Doorbell Stride: 4 bytes 00:14:06.644 NVM Subsystem Reset: Not Supported 00:14:06.644 Command Sets Supported 00:14:06.644 NVM Command Set: Supported 00:14:06.644 Boot Partition: Not Supported 00:14:06.644 Memory Page Size Minimum: 4096 bytes 00:14:06.644 Memory Page Size Maximum: 4096 bytes 00:14:06.644 Persistent Memory Region: Not Supported 00:14:06.644 Optional Asynchronous Events Supported 00:14:06.644 Namespace Attribute Notices: Supported 00:14:06.645 Firmware Activation Notices: Not Supported 00:14:06.645 ANA Change Notices: Not Supported 00:14:06.645 PLE Aggregate Log Change Notices: Not Supported 00:14:06.645 LBA Status Info Alert Notices: Not Supported 00:14:06.645 EGE Aggregate Log Change Notices: Not Supported 00:14:06.645 Normal NVM Subsystem Shutdown event: Not Supported 00:14:06.645 Zone Descriptor Change Notices: Not Supported 
00:14:06.645 Discovery Log Change Notices: Not Supported 00:14:06.645 Controller Attributes 00:14:06.645 128-bit Host Identifier: Supported 00:14:06.645 Non-Operational Permissive Mode: Not Supported 00:14:06.645 NVM Sets: Not Supported 00:14:06.645 Read Recovery Levels: Not Supported 00:14:06.645 Endurance Groups: Not Supported 00:14:06.645 Predictable Latency Mode: Not Supported 00:14:06.645 Traffic Based Keep ALive: Not Supported 00:14:06.645 Namespace Granularity: Not Supported 00:14:06.645 SQ Associations: Not Supported 00:14:06.645 UUID List: Not Supported 00:14:06.645 Multi-Domain Subsystem: Not Supported 00:14:06.645 Fixed Capacity Management: Not Supported 00:14:06.645 Variable Capacity Management: Not Supported 00:14:06.645 Delete Endurance Group: Not Supported 00:14:06.645 Delete NVM Set: Not Supported 00:14:06.645 Extended LBA Formats Supported: Not Supported 00:14:06.645 Flexible Data Placement Supported: Not Supported 00:14:06.645 00:14:06.645 Controller Memory Buffer Support 00:14:06.645 ================================ 00:14:06.645 Supported: No 00:14:06.645 00:14:06.645 Persistent Memory Region Support 00:14:06.645 ================================ 00:14:06.645 Supported: No 00:14:06.645 00:14:06.645 Admin Command Set Attributes 00:14:06.645 ============================ 00:14:06.645 Security Send/Receive: Not Supported 00:14:06.645 Format NVM: Not Supported 00:14:06.645 Firmware Activate/Download: Not Supported 00:14:06.645 Namespace Management: Not Supported 00:14:06.645 Device Self-Test: Not Supported 00:14:06.645 Directives: Not Supported 00:14:06.645 NVMe-MI: Not Supported 00:14:06.645 Virtualization Management: Not Supported 00:14:06.645 Doorbell Buffer Config: Not Supported 00:14:06.645 Get LBA Status Capability: Not Supported 00:14:06.645 Command & Feature Lockdown Capability: Not Supported 00:14:06.645 Abort Command Limit: 4 00:14:06.645 Async Event Request Limit: 4 00:14:06.645 Number of Firmware Slots: N/A 00:14:06.645 Firmware Slot 1 Read-Only: N/A 00:14:06.645 Firmware Activation Without Reset: N/A 00:14:06.645 Multiple Update Detection Support: N/A 00:14:06.645 Firmware Update Granularity: No Information Provided 00:14:06.645 Per-Namespace SMART Log: No 00:14:06.645 Asymmetric Namespace Access Log Page: Not Supported 00:14:06.645 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:06.645 Command Effects Log Page: Supported 00:14:06.645 Get Log Page Extended Data: Supported 00:14:06.645 Telemetry Log Pages: Not Supported 00:14:06.645 Persistent Event Log Pages: Not Supported 00:14:06.645 Supported Log Pages Log Page: May Support 00:14:06.645 Commands Supported & Effects Log Page: Not Supported 00:14:06.645 Feature Identifiers & Effects Log Page:May Support 00:14:06.645 NVMe-MI Commands & Effects Log Page: May Support 00:14:06.645 Data Area 4 for Telemetry Log: Not Supported 00:14:06.645 Error Log Page Entries Supported: 128 00:14:06.645 Keep Alive: Supported 00:14:06.645 Keep Alive Granularity: 10000 ms 00:14:06.645 00:14:06.645 NVM Command Set Attributes 00:14:06.645 ========================== 00:14:06.645 Submission Queue Entry Size 00:14:06.645 Max: 64 00:14:06.645 Min: 64 00:14:06.645 Completion Queue Entry Size 00:14:06.645 Max: 16 00:14:06.645 Min: 16 00:14:06.645 Number of Namespaces: 32 00:14:06.645 Compare Command: Supported 00:14:06.645 Write Uncorrectable Command: Not Supported 00:14:06.645 Dataset Management Command: Supported 00:14:06.645 Write Zeroes Command: Supported 00:14:06.645 Set Features Save Field: Not Supported 00:14:06.645 Reservations: Not 
Supported 00:14:06.645 Timestamp: Not Supported 00:14:06.645 Copy: Supported 00:14:06.645 Volatile Write Cache: Present 00:14:06.645 Atomic Write Unit (Normal): 1 00:14:06.645 Atomic Write Unit (PFail): 1 00:14:06.645 Atomic Compare & Write Unit: 1 00:14:06.645 Fused Compare & Write: Supported 00:14:06.645 Scatter-Gather List 00:14:06.645 SGL Command Set: Supported (Dword aligned) 00:14:06.645 SGL Keyed: Not Supported 00:14:06.645 SGL Bit Bucket Descriptor: Not Supported 00:14:06.645 SGL Metadata Pointer: Not Supported 00:14:06.645 Oversized SGL: Not Supported 00:14:06.645 SGL Metadata Address: Not Supported 00:14:06.645 SGL Offset: Not Supported 00:14:06.645 Transport SGL Data Block: Not Supported 00:14:06.645 Replay Protected Memory Block: Not Supported 00:14:06.645 00:14:06.645 Firmware Slot Information 00:14:06.645 ========================= 00:14:06.645 Active slot: 1 00:14:06.645 Slot 1 Firmware Revision: 24.01.1 00:14:06.645 00:14:06.645 00:14:06.645 Commands Supported and Effects 00:14:06.645 ============================== 00:14:06.645 Admin Commands 00:14:06.645 -------------- 00:14:06.645 Get Log Page (02h): Supported 00:14:06.645 Identify (06h): Supported 00:14:06.645 Abort (08h): Supported 00:14:06.645 Set Features (09h): Supported 00:14:06.645 Get Features (0Ah): Supported 00:14:06.645 Asynchronous Event Request (0Ch): Supported 00:14:06.645 Keep Alive (18h): Supported 00:14:06.645 I/O Commands 00:14:06.645 ------------ 00:14:06.645 Flush (00h): Supported LBA-Change 00:14:06.645 Write (01h): Supported LBA-Change 00:14:06.645 Read (02h): Supported 00:14:06.645 Compare (05h): Supported 00:14:06.645 Write Zeroes (08h): Supported LBA-Change 00:14:06.645 Dataset Management (09h): Supported LBA-Change 00:14:06.645 Copy (19h): Supported LBA-Change 00:14:06.645 Unknown (79h): Supported LBA-Change 00:14:06.645 Unknown (7Ah): Supported 00:14:06.645 00:14:06.645 Error Log 00:14:06.645 ========= 00:14:06.645 00:14:06.645 Arbitration 00:14:06.645 =========== 00:14:06.645 Arbitration Burst: 1 00:14:06.645 00:14:06.645 Power Management 00:14:06.645 ================ 00:14:06.645 Number of Power States: 1 00:14:06.645 Current Power State: Power State #0 00:14:06.645 Power State #0: 00:14:06.645 Max Power: 0.00 W 00:14:06.645 Non-Operational State: Operational 00:14:06.645 Entry Latency: Not Reported 00:14:06.645 Exit Latency: Not Reported 00:14:06.645 Relative Read Throughput: 0 00:14:06.645 Relative Read Latency: 0 00:14:06.645 Relative Write Throughput: 0 00:14:06.645 Relative Write Latency: 0 00:14:06.645 Idle Power: Not Reported 00:14:06.645 Active Power: Not Reported 00:14:06.645 Non-Operational Permissive Mode: Not Supported 00:14:06.645 00:14:06.645 Health Information 00:14:06.645 ================== 00:14:06.645 Critical Warnings: 00:14:06.645 Available Spare Space: OK 00:14:06.645 Temperature: OK 00:14:06.645 Device Reliability: OK 00:14:06.645 Read Only: No 00:14:06.645 Volatile Memory Backup: OK 00:14:06.645 Current Temperature: 0 Kelvin[2024-12-14 06:44:20.623183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:06.645 [2024-12-14 06:44:20.630969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:06.645 [2024-12-14 06:44:20.631047] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:06.645 [2024-12-14 06:44:20.631062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.645 [2024-12-14 06:44:20.631070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.645 [2024-12-14 06:44:20.631078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.645 [2024-12-14 06:44:20.631085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.645 [2024-12-14 06:44:20.631161] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:06.645 [2024-12-14 06:44:20.631180] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:06.645 [2024-12-14 06:44:20.632218] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:06.645 [2024-12-14 06:44:20.632240] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:06.904 [2024-12-14 06:44:20.633162] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:06.904 [2024-12-14 06:44:20.633193] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:06.904 [2024-12-14 06:44:20.633424] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:06.904 [2024-12-14 06:44:20.635958] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:06.904 (-273 Celsius) 00:14:06.904 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:06.904 Available Spare: 0% 00:14:06.904 Available Spare Threshold: 0% 00:14:06.904 Life Percentage Used: 0% 00:14:06.904 Data Units Read: 0 00:14:06.904 Data Units Written: 0 00:14:06.904 Host Read Commands: 0 00:14:06.904 Host Write Commands: 0 00:14:06.904 Controller Busy Time: 0 minutes 00:14:06.904 Power Cycles: 0 00:14:06.904 Power On Hours: 0 hours 00:14:06.904 Unsafe Shutdowns: 0 00:14:06.904 Unrecoverable Media Errors: 0 00:14:06.904 Lifetime Error Log Entries: 0 00:14:06.904 Warning Temperature Time: 0 minutes 00:14:06.904 Critical Temperature Time: 0 minutes 00:14:06.904 00:14:06.904 Number of Queues 00:14:06.904 ================ 00:14:06.904 Number of I/O Submission Queues: 127 00:14:06.904 Number of I/O Completion Queues: 127 00:14:06.904 00:14:06.904 Active Namespaces 00:14:06.904 ================= 00:14:06.904 Namespace ID:1 00:14:06.904 Error Recovery Timeout: Unlimited 00:14:06.904 Command Set Identifier: NVM (00h) 00:14:06.904 Deallocate: Supported 00:14:06.904 Deallocated/Unwritten Error: Not Supported 00:14:06.904 Deallocated Read Value: Unknown 00:14:06.904 Deallocate in Write Zeroes: Not Supported 00:14:06.904 Deallocated Guard Field: 0xFFFF 00:14:06.904 Flush: Supported 00:14:06.904 Reservation: Supported 00:14:06.904 Namespace Sharing Capabilities: Multiple Controllers 00:14:06.904 Size (in LBAs): 131072 (0GiB) 00:14:06.904 Capacity (in LBAs): 131072 (0GiB) 00:14:06.904 Utilization (in LBAs): 131072 (0GiB) 00:14:06.904 NGUID: 
F761CC9E435C47038C8AB6508A606136 00:14:06.904 UUID: f761cc9e-435c-4703-8c8a-b6508a606136 00:14:06.904 Thin Provisioning: Not Supported 00:14:06.904 Per-NS Atomic Units: Yes 00:14:06.904 Atomic Boundary Size (Normal): 0 00:14:06.904 Atomic Boundary Size (PFail): 0 00:14:06.904 Atomic Boundary Offset: 0 00:14:06.904 Maximum Single Source Range Length: 65535 00:14:06.904 Maximum Copy Length: 65535 00:14:06.904 Maximum Source Range Count: 1 00:14:06.904 NGUID/EUI64 Never Reused: No 00:14:06.904 Namespace Write Protected: No 00:14:06.904 Number of LBA Formats: 1 00:14:06.904 Current LBA Format: LBA Format #00 00:14:06.904 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:06.904 00:14:06.904 06:44:20 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:12.171 Initializing NVMe Controllers 00:14:12.171 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:12.171 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:12.171 Initialization complete. Launching workers. 00:14:12.171 ======================================================== 00:14:12.171 Latency(us) 00:14:12.171 Device Information : IOPS MiB/s Average min max 00:14:12.171 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34919.31 136.40 3664.89 1202.26 9899.90 00:14:12.171 ======================================================== 00:14:12.171 Total : 34919.31 136.40 3664.89 1202.26 9899.90 00:14:12.171 00:14:12.171 06:44:26 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:18.757 Initializing NVMe Controllers 00:14:18.757 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:18.757 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:18.757 Initialization complete. Launching workers. 00:14:18.757 ======================================================== 00:14:18.757 Latency(us) 00:14:18.757 Device Information : IOPS MiB/s Average min max 00:14:18.757 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35801.76 139.85 3574.99 1150.71 9626.92 00:14:18.757 ======================================================== 00:14:18.757 Total : 35801.76 139.85 3574.99 1150.71 9626.92 00:14:18.757 00:14:18.757 06:44:31 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:22.943 Initializing NVMe Controllers 00:14:22.943 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:22.943 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:22.943 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:22.943 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:22.943 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:22.943 Initialization complete. 
Launching workers. 00:14:22.943 Starting thread on core 2 00:14:22.943 Starting thread on core 3 00:14:22.943 Starting thread on core 1 00:14:22.943 06:44:36 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:27.132 Initializing NVMe Controllers 00:14:27.132 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:27.132 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:27.132 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:27.132 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:27.132 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:27.132 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:27.132 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:14:27.132 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:27.132 Initialization complete. Launching workers. 00:14:27.132 Starting thread on core 1 with urgent priority queue 00:14:27.132 Starting thread on core 2 with urgent priority queue 00:14:27.132 Starting thread on core 3 with urgent priority queue 00:14:27.132 Starting thread on core 0 with urgent priority queue 00:14:27.132 SPDK bdev Controller (SPDK2 ) core 0: 4422.67 IO/s 22.61 secs/100000 ios 00:14:27.132 SPDK bdev Controller (SPDK2 ) core 1: 4166.33 IO/s 24.00 secs/100000 ios 00:14:27.132 SPDK bdev Controller (SPDK2 ) core 2: 4934.00 IO/s 20.27 secs/100000 ios 00:14:27.132 SPDK bdev Controller (SPDK2 ) core 3: 4596.00 IO/s 21.76 secs/100000 ios 00:14:27.132 ======================================================== 00:14:27.132 00:14:27.132 06:44:40 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:27.132 Initializing NVMe Controllers 00:14:27.132 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:27.132 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:27.132 Namespace ID: 1 size: 0GB 00:14:27.132 Initialization complete. 00:14:27.132 INFO: using host memory buffer for IO 00:14:27.132 Hello world! 00:14:27.132 06:44:40 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:28.068 Initializing NVMe Controllers 00:14:28.068 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:28.068 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:28.068 Initialization complete. Launching workers. 
00:14:28.068 submit (in ns) avg, min, max = 10265.5, 3948.6, 4021515.5 00:14:28.068 complete (in ns) avg, min, max = 25980.9, 2205.5, 7013337.3 00:14:28.068 00:14:28.068 Submit histogram 00:14:28.068 ================ 00:14:28.068 Range in us Cumulative Count 00:14:28.068 3.927 - 3.956: 0.0072% ( 1) 00:14:28.068 3.956 - 3.985: 0.3906% ( 53) 00:14:28.068 3.985 - 4.015: 6.6474% ( 865) 00:14:28.068 4.015 - 4.044: 23.2622% ( 2297) 00:14:28.068 4.044 - 4.073: 39.0018% ( 2176) 00:14:28.068 4.073 - 4.102: 52.8101% ( 1909) 00:14:28.068 4.102 - 4.131: 63.9783% ( 1544) 00:14:28.068 4.131 - 4.160: 75.1537% ( 1545) 00:14:28.068 4.160 - 4.189: 81.1501% ( 829) 00:14:28.068 4.189 - 4.218: 83.2694% ( 293) 00:14:28.068 4.218 - 4.247: 84.6221% ( 187) 00:14:28.068 4.247 - 4.276: 85.5190% ( 124) 00:14:28.068 4.276 - 4.305: 86.1627% ( 89) 00:14:28.068 4.305 - 4.335: 87.2911% ( 156) 00:14:28.068 4.335 - 4.364: 88.1302% ( 116) 00:14:28.068 4.364 - 4.393: 89.2369% ( 153) 00:14:28.068 4.393 - 4.422: 91.4358% ( 304) 00:14:28.068 4.422 - 4.451: 93.6854% ( 311) 00:14:28.068 4.451 - 4.480: 95.2116% ( 211) 00:14:28.068 4.480 - 4.509: 96.8608% ( 228) 00:14:28.068 4.509 - 4.538: 97.8445% ( 136) 00:14:28.068 4.538 - 4.567: 98.2206% ( 52) 00:14:28.068 4.567 - 4.596: 98.3580% ( 19) 00:14:28.068 4.596 - 4.625: 98.4448% ( 12) 00:14:28.068 4.625 - 4.655: 98.4955% ( 7) 00:14:28.068 4.655 - 4.684: 98.5244% ( 4) 00:14:28.068 4.713 - 4.742: 98.5461% ( 3) 00:14:28.068 4.742 - 4.771: 98.5606% ( 2) 00:14:28.068 4.771 - 4.800: 98.5750% ( 2) 00:14:28.068 4.800 - 4.829: 98.6112% ( 5) 00:14:28.068 4.829 - 4.858: 98.6184% ( 1) 00:14:28.068 4.858 - 4.887: 98.6546% ( 5) 00:14:28.068 4.887 - 4.916: 98.6908% ( 5) 00:14:28.068 4.916 - 4.945: 98.7486% ( 8) 00:14:28.068 4.945 - 4.975: 98.8137% ( 9) 00:14:28.068 4.975 - 5.004: 98.8644% ( 7) 00:14:28.068 5.004 - 5.033: 98.9512% ( 12) 00:14:28.068 5.033 - 5.062: 99.0235% ( 10) 00:14:28.068 5.062 - 5.091: 99.0597% ( 5) 00:14:28.068 5.091 - 5.120: 99.1103% ( 7) 00:14:28.068 5.120 - 5.149: 99.1465% ( 5) 00:14:28.068 5.149 - 5.178: 99.1609% ( 2) 00:14:28.068 5.178 - 5.207: 99.1754% ( 2) 00:14:28.068 5.207 - 5.236: 99.1899% ( 2) 00:14:28.068 5.236 - 5.265: 99.2260% ( 5) 00:14:28.068 5.295 - 5.324: 99.2405% ( 2) 00:14:28.068 5.324 - 5.353: 99.2550% ( 2) 00:14:28.068 5.353 - 5.382: 99.2622% ( 1) 00:14:28.068 5.382 - 5.411: 99.2911% ( 4) 00:14:28.068 5.440 - 5.469: 99.2984% ( 1) 00:14:28.068 5.469 - 5.498: 99.3056% ( 1) 00:14:28.068 5.498 - 5.527: 99.3128% ( 1) 00:14:28.068 5.527 - 5.556: 99.3201% ( 1) 00:14:28.068 5.556 - 5.585: 99.3273% ( 1) 00:14:28.068 5.585 - 5.615: 99.3345% ( 1) 00:14:28.068 5.644 - 5.673: 99.3418% ( 1) 00:14:28.068 5.702 - 5.731: 99.3490% ( 1) 00:14:28.068 5.964 - 5.993: 99.3562% ( 1) 00:14:28.068 7.913 - 7.971: 99.3635% ( 1) 00:14:28.068 9.309 - 9.367: 99.3707% ( 1) 00:14:28.068 9.367 - 9.425: 99.3779% ( 1) 00:14:28.068 9.425 - 9.484: 99.3924% ( 2) 00:14:28.068 9.484 - 9.542: 99.4213% ( 4) 00:14:28.068 9.542 - 9.600: 99.4358% ( 2) 00:14:28.068 9.600 - 9.658: 99.4792% ( 6) 00:14:28.068 9.658 - 9.716: 99.4864% ( 1) 00:14:28.068 9.716 - 9.775: 99.4937% ( 1) 00:14:28.068 9.775 - 9.833: 99.5226% ( 4) 00:14:28.068 9.833 - 9.891: 99.5298% ( 1) 00:14:28.068 9.891 - 9.949: 99.5443% ( 2) 00:14:28.068 9.949 - 10.007: 99.5588% ( 2) 00:14:28.068 10.007 - 10.065: 99.5660% ( 1) 00:14:28.068 10.065 - 10.124: 99.5732% ( 1) 00:14:28.068 10.124 - 10.182: 99.5949% ( 3) 00:14:28.068 10.182 - 10.240: 99.6094% ( 2) 00:14:28.068 10.240 - 10.298: 99.6166% ( 1) 00:14:28.068 10.298 - 10.356: 99.6239% ( 1) 
00:14:28.068 10.356 - 10.415: 99.6456% ( 3) 00:14:28.068 10.473 - 10.531: 99.6673% ( 3) 00:14:28.068 10.531 - 10.589: 99.6890% ( 3) 00:14:28.068 10.589 - 10.647: 99.7034% ( 2) 00:14:28.068 10.647 - 10.705: 99.7107% ( 1) 00:14:28.068 10.764 - 10.822: 99.7251% ( 2) 00:14:28.068 10.822 - 10.880: 99.7324% ( 1) 00:14:28.068 10.880 - 10.938: 99.7468% ( 2) 00:14:28.068 10.938 - 10.996: 99.7613% ( 2) 00:14:28.068 11.113 - 11.171: 99.7685% ( 1) 00:14:28.068 11.404 - 11.462: 99.7758% ( 1) 00:14:28.068 12.044 - 12.102: 99.7830% ( 1) 00:14:28.068 12.102 - 12.160: 99.7902% ( 1) 00:14:28.068 12.218 - 12.276: 99.8047% ( 2) 00:14:28.068 12.451 - 12.509: 99.8119% ( 1) 00:14:28.068 12.509 - 12.567: 99.8192% ( 1) 00:14:28.068 12.742 - 12.800: 99.8264% ( 1) 00:14:28.068 15.593 - 15.709: 99.8336% ( 1) 00:14:28.068 17.338 - 17.455: 99.8409% ( 1) 00:14:28.068 18.036 - 18.153: 99.8481% ( 1) 00:14:28.068 3991.738 - 4021.527: 100.0000% ( 21) 00:14:28.068 00:14:28.068 Complete histogram 00:14:28.068 ================== 00:14:28.068 Range in us Cumulative Count 00:14:28.068 2.196 - 2.211: 0.0868% ( 12) 00:14:28.068 2.211 - 2.225: 22.0832% ( 3041) 00:14:28.068 2.225 - 2.240: 78.5750% ( 7810) 00:14:28.068 2.240 - 2.255: 93.6564% ( 2085) 00:14:28.068 2.255 - 2.269: 94.6184% ( 133) 00:14:28.068 2.269 - 2.284: 95.1465% ( 73) 00:14:28.068 2.284 - 2.298: 95.7685% ( 86) 00:14:28.068 2.298 - 2.313: 96.4485% ( 94) 00:14:28.068 2.313 - 2.327: 96.9910% ( 75) 00:14:28.068 2.327 - 2.342: 97.4322% ( 61) 00:14:28.068 2.342 - 2.356: 97.8879% ( 63) 00:14:28.068 2.356 - 2.371: 98.2278% ( 47) 00:14:28.068 2.371 - 2.385: 98.4955% ( 37) 00:14:28.068 2.385 - 2.400: 98.6691% ( 24) 00:14:28.068 2.400 - 2.415: 98.7197% ( 7) 00:14:28.068 2.415 - 2.429: 98.7631% ( 6) 00:14:28.068 2.429 - 2.444: 98.7703% ( 1) 00:14:28.068 2.444 - 2.458: 98.7776% ( 1) 00:14:28.068 2.458 - 2.473: 98.7920% ( 2) 00:14:28.068 2.473 - 2.487: 98.8065% ( 2) 00:14:28.068 2.516 - 2.531: 98.8137% ( 1) 00:14:28.068 2.531 - 2.545: 98.8499% ( 5) 00:14:28.068 2.545 - 2.560: 98.8788% ( 4) 00:14:28.068 2.560 - 2.575: 98.9222% ( 6) 00:14:28.068 2.589 - 2.604: 98.9439% ( 3) 00:14:28.068 2.604 - 2.618: 98.9512% ( 1) 00:14:28.068 2.647 - 2.662: 98.9584% ( 1) 00:14:28.068 2.662 - 2.676: 98.9656% ( 1) 00:14:28.068 2.735 - 2.749: 98.9729% ( 1) 00:14:28.068 3.142 - 3.156: 98.9801% ( 1) 00:14:28.068 3.375 - 3.389: 98.9873% ( 1) 00:14:28.068 3.462 - 3.476: 98.9946% ( 1) 00:14:28.068 3.535 - 3.549: 99.0018% ( 1) 00:14:28.068 3.549 - 3.564: 99.0090% ( 1) 00:14:28.068 3.578 - 3.593: 99.0163% ( 1) 00:14:28.068 3.607 - 3.622: 99.0235% ( 1) 00:14:28.068 3.651 - 3.665: 99.0380% ( 2) 00:14:28.068 3.665 - 3.680: 99.0452% ( 1) 00:14:28.069 3.695 - 3.709: 99.0524% ( 1) 00:14:28.069 3.709 - 3.724: 99.0669% ( 2) 00:14:28.069 3.724 - 3.753: 99.0741% ( 1) 00:14:28.069 3.753 - 3.782: 99.0814% ( 1) 00:14:28.069 3.782 - 3.811: 99.1031% ( 3) 00:14:28.069 3.811 - 3.840: 99.1103% ( 1) 00:14:28.069 3.869 - 3.898: 99.1175% ( 1) 00:14:28.069 3.898 - 3.927: 99.1248% ( 1) 00:14:28.069 3.927 - 3.956: 99.1320% ( 1) 00:14:28.069 3.985 - 4.015: 99.1392% ( 1) 00:14:28.069 4.015 - 4.044: 99.1465% ( 1) 00:14:28.069 4.102 - 4.131: 99.1537% ( 1) 00:14:28.069 5.120 - 5.149: 99.1609% ( 1) 00:14:28.069 7.447 - 7.505: 99.1682% ( 1) 00:14:28.069 7.505 - 7.564: 99.1754% ( 1) 00:14:28.069 7.622 - 7.680: 99.1971% ( 3) 00:14:28.069 7.680 - 7.738: 99.2260% ( 4) 00:14:28.069 7.796 - 7.855: 99.2405% ( 2) 00:14:28.069 7.913 - 7.971: 99.2477% ( 1) 00:14:28.069 8.029 - 8.087: 99.2550% ( 1) 00:14:28.069 8.145 - 8.204: 99.2694% ( 2) 
00:14:28.069 8.204 - 8.262: 99.2839% ( 2) 00:14:28.069 8.378 - 8.436: 99.2911% ( 1) 00:14:28.069 8.553 - 8.611: 99.3056% ( 2) 00:14:28.069 8.611 - 8.669: 99.3201% ( 2) 00:14:28.069 8.669 - 8.727: 99.3273% ( 1) 00:14:28.069 8.902 - 8.960: 99.3345% ( 1) 00:14:28.069 8.960 - 9.018: 99.3418% ( 1) 00:14:28.328 9.193 - 9.251: 99.3490% ( 1) 00:14:28.328 9.251 - 9.309: 99.3562% ( 1) 00:14:28.328 9.542 - 9.600: 99.3635% ( 1) 00:14:28.328 9.775 - 9.833: 99.3707% ( 1) 00:14:28.328 9.891 - 9.949: 99.3779% ( 1) 00:14:28.328 14.895 - 15.011: 99.3852% ( 1) 00:14:28.328 16.058 - 16.175: 99.3924% ( 1) 00:14:28.328 17.455 - 17.571: 99.3996% ( 1) 00:14:28.328 46.080 - 46.313: 99.4069% ( 1) 00:14:28.328 58.880 - 59.113: 99.4141% ( 1) 00:14:28.328 1050.065 - 1057.513: 99.4213% ( 1) 00:14:28.328 3991.738 - 4021.527: 99.9855% ( 78) 00:14:28.328 7000.436 - 7030.225: 100.0000% ( 2) 00:14:28.328 00:14:28.328 06:44:42 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:28.328 06:44:42 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:28.328 06:44:42 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:28.328 06:44:42 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:28.328 06:44:42 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:28.587 [ 00:14:28.587 { 00:14:28.587 "allow_any_host": true, 00:14:28.587 "hosts": [], 00:14:28.587 "listen_addresses": [], 00:14:28.587 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:28.587 "subtype": "Discovery" 00:14:28.587 }, 00:14:28.587 { 00:14:28.587 "allow_any_host": true, 00:14:28.587 "hosts": [], 00:14:28.587 "listen_addresses": [ 00:14:28.587 { 00:14:28.587 "adrfam": "IPv4", 00:14:28.587 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:28.587 "transport": "VFIOUSER", 00:14:28.587 "trsvcid": "0", 00:14:28.587 "trtype": "VFIOUSER" 00:14:28.587 } 00:14:28.587 ], 00:14:28.587 "max_cntlid": 65519, 00:14:28.587 "max_namespaces": 32, 00:14:28.587 "min_cntlid": 1, 00:14:28.587 "model_number": "SPDK bdev Controller", 00:14:28.587 "namespaces": [ 00:14:28.587 { 00:14:28.587 "bdev_name": "Malloc1", 00:14:28.587 "name": "Malloc1", 00:14:28.587 "nguid": "B401C410C84D4F1D96C38F76FDB452A6", 00:14:28.587 "nsid": 1, 00:14:28.587 "uuid": "b401c410-c84d-4f1d-96c3-8f76fdb452a6" 00:14:28.587 }, 00:14:28.587 { 00:14:28.587 "bdev_name": "Malloc3", 00:14:28.587 "name": "Malloc3", 00:14:28.587 "nguid": "6730FE985A294FCEB7A2F57E72C7EF7B", 00:14:28.587 "nsid": 2, 00:14:28.587 "uuid": "6730fe98-5a29-4fce-b7a2-f57e72c7ef7b" 00:14:28.587 } 00:14:28.587 ], 00:14:28.587 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:28.587 "serial_number": "SPDK1", 00:14:28.587 "subtype": "NVMe" 00:14:28.587 }, 00:14:28.587 { 00:14:28.587 "allow_any_host": true, 00:14:28.587 "hosts": [], 00:14:28.587 "listen_addresses": [ 00:14:28.587 { 00:14:28.587 "adrfam": "IPv4", 00:14:28.587 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:28.587 "transport": "VFIOUSER", 00:14:28.587 "trsvcid": "0", 00:14:28.587 "trtype": "VFIOUSER" 00:14:28.587 } 00:14:28.587 ], 00:14:28.587 "max_cntlid": 65519, 00:14:28.587 "max_namespaces": 32, 00:14:28.587 "min_cntlid": 1, 00:14:28.587 "model_number": "SPDK bdev Controller", 00:14:28.587 "namespaces": [ 00:14:28.587 { 00:14:28.587 "bdev_name": "Malloc2", 00:14:28.587 "name": "Malloc2", 00:14:28.587 "nguid": "F761CC9E435C47038C8AB6508A606136", 00:14:28.587 "nsid": 1, 
00:14:28.587 "uuid": "f761cc9e-435c-4703-8c8a-b6508a606136" 00:14:28.587 } 00:14:28.587 ], 00:14:28.587 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:28.587 "serial_number": "SPDK2", 00:14:28.587 "subtype": "NVMe" 00:14:28.587 } 00:14:28.587 ] 00:14:28.587 06:44:42 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:28.587 06:44:42 -- target/nvmf_vfio_user.sh@34 -- # aerpid=71572 00:14:28.587 06:44:42 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:28.587 06:44:42 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:28.587 06:44:42 -- common/autotest_common.sh@1254 -- # local i=0 00:14:28.587 06:44:42 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:28.587 06:44:42 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:14:28.587 06:44:42 -- common/autotest_common.sh@1257 -- # i=1 00:14:28.587 06:44:42 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:14:28.587 06:44:42 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:28.587 06:44:42 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:14:28.587 06:44:42 -- common/autotest_common.sh@1257 -- # i=2 00:14:28.587 06:44:42 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:14:28.846 06:44:42 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:28.846 06:44:42 -- common/autotest_common.sh@1256 -- # '[' 2 -lt 200 ']' 00:14:28.846 06:44:42 -- common/autotest_common.sh@1257 -- # i=3 00:14:28.846 06:44:42 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:14:28.846 06:44:42 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:28.846 06:44:42 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:28.846 06:44:42 -- common/autotest_common.sh@1265 -- # return 0 00:14:28.846 06:44:42 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:28.846 06:44:42 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:29.105 Malloc4 00:14:29.364 06:44:43 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:29.364 06:44:43 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:29.635 Asynchronous Event Request test 00:14:29.635 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:29.635 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:29.635 Registering asynchronous event callbacks... 00:14:29.635 Starting namespace attribute notice tests for all controllers... 00:14:29.635 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:29.635 aer_cb - Changed Namespace 00:14:29.635 Cleaning up... 
00:14:29.635 [ 00:14:29.635 { 00:14:29.635 "allow_any_host": true, 00:14:29.635 "hosts": [], 00:14:29.635 "listen_addresses": [], 00:14:29.635 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:29.635 "subtype": "Discovery" 00:14:29.635 }, 00:14:29.635 { 00:14:29.635 "allow_any_host": true, 00:14:29.635 "hosts": [], 00:14:29.635 "listen_addresses": [ 00:14:29.635 { 00:14:29.635 "adrfam": "IPv4", 00:14:29.635 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:29.635 "transport": "VFIOUSER", 00:14:29.635 "trsvcid": "0", 00:14:29.635 "trtype": "VFIOUSER" 00:14:29.635 } 00:14:29.635 ], 00:14:29.635 "max_cntlid": 65519, 00:14:29.635 "max_namespaces": 32, 00:14:29.635 "min_cntlid": 1, 00:14:29.635 "model_number": "SPDK bdev Controller", 00:14:29.635 "namespaces": [ 00:14:29.635 { 00:14:29.635 "bdev_name": "Malloc1", 00:14:29.635 "name": "Malloc1", 00:14:29.635 "nguid": "B401C410C84D4F1D96C38F76FDB452A6", 00:14:29.635 "nsid": 1, 00:14:29.635 "uuid": "b401c410-c84d-4f1d-96c3-8f76fdb452a6" 00:14:29.635 }, 00:14:29.635 { 00:14:29.635 "bdev_name": "Malloc3", 00:14:29.635 "name": "Malloc3", 00:14:29.635 "nguid": "6730FE985A294FCEB7A2F57E72C7EF7B", 00:14:29.635 "nsid": 2, 00:14:29.635 "uuid": "6730fe98-5a29-4fce-b7a2-f57e72c7ef7b" 00:14:29.635 } 00:14:29.635 ], 00:14:29.635 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:29.635 "serial_number": "SPDK1", 00:14:29.635 "subtype": "NVMe" 00:14:29.635 }, 00:14:29.635 { 00:14:29.635 "allow_any_host": true, 00:14:29.635 "hosts": [], 00:14:29.635 "listen_addresses": [ 00:14:29.635 { 00:14:29.635 "adrfam": "IPv4", 00:14:29.635 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:29.635 "transport": "VFIOUSER", 00:14:29.635 "trsvcid": "0", 00:14:29.635 "trtype": "VFIOUSER" 00:14:29.635 } 00:14:29.635 ], 00:14:29.635 "max_cntlid": 65519, 00:14:29.635 "max_namespaces": 32, 00:14:29.635 "min_cntlid": 1, 00:14:29.635 "model_number": "SPDK bdev Controller", 00:14:29.635 "namespaces": [ 00:14:29.635 { 00:14:29.635 "bdev_name": "Malloc2", 00:14:29.635 "name": "Malloc2", 00:14:29.635 "nguid": "F761CC9E435C47038C8AB6508A606136", 00:14:29.635 "nsid": 1, 00:14:29.635 "uuid": "f761cc9e-435c-4703-8c8a-b6508a606136" 00:14:29.635 }, 00:14:29.635 { 00:14:29.635 "bdev_name": "Malloc4", 00:14:29.635 "name": "Malloc4", 00:14:29.635 "nguid": "CCCBFE5D1A2943F88343332ACEDC74BC", 00:14:29.635 "nsid": 2, 00:14:29.635 "uuid": "cccbfe5d-1a29-43f8-8343-332acedc74bc" 00:14:29.635 } 00:14:29.635 ], 00:14:29.635 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:29.635 "serial_number": "SPDK2", 00:14:29.635 "subtype": "NVMe" 00:14:29.635 } 00:14:29.635 ] 00:14:29.635 06:44:43 -- target/nvmf_vfio_user.sh@44 -- # wait 71572 00:14:29.635 06:44:43 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:29.635 06:44:43 -- target/nvmf_vfio_user.sh@95 -- # killprocess 70880 00:14:29.635 06:44:43 -- common/autotest_common.sh@936 -- # '[' -z 70880 ']' 00:14:29.636 06:44:43 -- common/autotest_common.sh@940 -- # kill -0 70880 00:14:29.636 06:44:43 -- common/autotest_common.sh@941 -- # uname 00:14:29.636 06:44:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:29.636 06:44:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70880 00:14:29.922 killing process with pid 70880 00:14:29.922 06:44:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:29.922 06:44:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:29.922 06:44:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70880' 00:14:29.922 06:44:43 -- 
common/autotest_common.sh@955 -- # kill 70880 00:14:29.922 [2024-12-14 06:44:43.627936] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:14:29.922 06:44:43 -- common/autotest_common.sh@960 -- # wait 70880 00:14:30.181 06:44:44 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:30.181 06:44:44 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:30.181 06:44:44 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:30.181 06:44:44 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:30.181 06:44:44 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:30.181 06:44:44 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:30.181 06:44:44 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=71619 00:14:30.181 Process pid: 71619 00:14:30.181 06:44:44 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 71619' 00:14:30.181 06:44:44 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:30.181 06:44:44 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 71619 00:14:30.181 06:44:44 -- common/autotest_common.sh@829 -- # '[' -z 71619 ']' 00:14:30.181 06:44:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.181 06:44:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:30.181 06:44:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.181 06:44:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:30.181 06:44:44 -- common/autotest_common.sh@10 -- # set +x 00:14:30.181 [2024-12-14 06:44:44.164685] thread.c:2929:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:30.181 [2024-12-14 06:44:44.166078] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:30.181 [2024-12-14 06:44:44.166156] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.439 [2024-12-14 06:44:44.301054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:30.698 [2024-12-14 06:44:44.446053] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:30.698 [2024-12-14 06:44:44.446253] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.698 [2024-12-14 06:44:44.446266] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.698 [2024-12-14 06:44:44.446274] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
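The startup banner points at two ways of pulling tracepoint data out of this target while it runs; in command form (both taken straight from the notices above — only the copy destination is an arbitrary placeholder):

  # decode a live snapshot of the nvmf tracepoints for shm id 0
  spdk_trace -s nvmf -i 0
  # or keep the raw ring buffer for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0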
00:14:30.698 [2024-12-14 06:44:44.446422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.698 [2024-12-14 06:44:44.446753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.698 [2024-12-14 06:44:44.447365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.698 [2024-12-14 06:44:44.447436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.698 [2024-12-14 06:44:44.563091] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:14:30.698 [2024-12-14 06:44:44.570120] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:14:30.698 [2024-12-14 06:44:44.570321] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:14:30.698 [2024-12-14 06:44:44.571137] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:30.698 [2024-12-14 06:44:44.571262] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:14:31.272 06:44:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.272 06:44:45 -- common/autotest_common.sh@862 -- # return 0 00:14:31.272 06:44:45 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:32.211 06:44:46 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:32.470 06:44:46 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:32.470 06:44:46 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:32.470 06:44:46 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:32.470 06:44:46 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:32.470 06:44:46 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:32.729 Malloc1 00:14:32.987 06:44:46 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:33.246 06:44:47 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:33.504 06:44:47 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:33.762 06:44:47 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:33.762 06:44:47 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:33.762 06:44:47 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:34.020 Malloc2 00:14:34.021 06:44:47 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:34.279 06:44:48 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:34.537 06:44:48 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:34.795 
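Taken together, the interrupt-mode pass that just started repeats the provisioning pattern used throughout this run: one malloc-backed namespace per vfio-user subsystem, each listening on its own socket directory. A compressed sketch of that sequence — the individual commands are lifted from the trace above, while the loop and the backgrounding are just shorthand for what the harness helpers actually do:

  # target on cores 0-3, interrupt mode, all tracepoint groups enabled
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  # (the harness waits for the RPC socket before issuing the calls below)

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER -M -I

  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      $rpc bdev_malloc_create 64 512 -b Malloc$i
      $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
          -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done

With both subsystems listening, the run tears everything back down via stop_nvmf_vfio_user, which is the killprocess and rm -rf sequence that follows.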
06:44:48 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:34.795 06:44:48 -- target/nvmf_vfio_user.sh@95 -- # killprocess 71619 00:14:34.795 06:44:48 -- common/autotest_common.sh@936 -- # '[' -z 71619 ']' 00:14:34.795 06:44:48 -- common/autotest_common.sh@940 -- # kill -0 71619 00:14:34.795 06:44:48 -- common/autotest_common.sh@941 -- # uname 00:14:34.795 06:44:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:34.795 06:44:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71619 00:14:34.795 06:44:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:34.795 06:44:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:34.795 06:44:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71619' 00:14:34.795 killing process with pid 71619 00:14:34.795 06:44:48 -- common/autotest_common.sh@955 -- # kill 71619 00:14:34.795 06:44:48 -- common/autotest_common.sh@960 -- # wait 71619 00:14:35.362 06:44:49 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:35.362 06:44:49 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:35.362 00:14:35.362 real 0m56.417s 00:14:35.362 user 3m40.957s 00:14:35.362 sys 0m4.458s 00:14:35.362 06:44:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:35.362 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:14:35.362 ************************************ 00:14:35.362 END TEST nvmf_vfio_user 00:14:35.362 ************************************ 00:14:35.362 06:44:49 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:35.362 06:44:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:35.362 06:44:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:35.362 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:14:35.362 ************************************ 00:14:35.362 START TEST nvmf_vfio_user_nvme_compliance 00:14:35.362 ************************************ 00:14:35.362 06:44:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:35.362 * Looking for test storage... 
00:14:35.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/compliance 00:14:35.362 06:44:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:35.362 06:44:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:35.362 06:44:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:35.362 06:44:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:35.362 06:44:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:35.362 06:44:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:35.362 06:44:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:35.362 06:44:49 -- scripts/common.sh@335 -- # IFS=.-: 00:14:35.362 06:44:49 -- scripts/common.sh@335 -- # read -ra ver1 00:14:35.362 06:44:49 -- scripts/common.sh@336 -- # IFS=.-: 00:14:35.362 06:44:49 -- scripts/common.sh@336 -- # read -ra ver2 00:14:35.362 06:44:49 -- scripts/common.sh@337 -- # local 'op=<' 00:14:35.362 06:44:49 -- scripts/common.sh@339 -- # ver1_l=2 00:14:35.362 06:44:49 -- scripts/common.sh@340 -- # ver2_l=1 00:14:35.362 06:44:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:35.362 06:44:49 -- scripts/common.sh@343 -- # case "$op" in 00:14:35.362 06:44:49 -- scripts/common.sh@344 -- # : 1 00:14:35.362 06:44:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:35.362 06:44:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:35.362 06:44:49 -- scripts/common.sh@364 -- # decimal 1 00:14:35.362 06:44:49 -- scripts/common.sh@352 -- # local d=1 00:14:35.362 06:44:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:35.362 06:44:49 -- scripts/common.sh@354 -- # echo 1 00:14:35.362 06:44:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:35.362 06:44:49 -- scripts/common.sh@365 -- # decimal 2 00:14:35.362 06:44:49 -- scripts/common.sh@352 -- # local d=2 00:14:35.362 06:44:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:35.362 06:44:49 -- scripts/common.sh@354 -- # echo 2 00:14:35.362 06:44:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:35.362 06:44:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:35.362 06:44:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:35.362 06:44:49 -- scripts/common.sh@367 -- # return 0 00:14:35.362 06:44:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:35.362 06:44:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:35.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.363 --rc genhtml_branch_coverage=1 00:14:35.363 --rc genhtml_function_coverage=1 00:14:35.363 --rc genhtml_legend=1 00:14:35.363 --rc geninfo_all_blocks=1 00:14:35.363 --rc geninfo_unexecuted_blocks=1 00:14:35.363 00:14:35.363 ' 00:14:35.363 06:44:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:35.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.363 --rc genhtml_branch_coverage=1 00:14:35.363 --rc genhtml_function_coverage=1 00:14:35.363 --rc genhtml_legend=1 00:14:35.363 --rc geninfo_all_blocks=1 00:14:35.363 --rc geninfo_unexecuted_blocks=1 00:14:35.363 00:14:35.363 ' 00:14:35.363 06:44:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:35.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.363 --rc genhtml_branch_coverage=1 00:14:35.363 --rc genhtml_function_coverage=1 00:14:35.363 --rc genhtml_legend=1 00:14:35.363 --rc geninfo_all_blocks=1 00:14:35.363 --rc geninfo_unexecuted_blocks=1 00:14:35.363 00:14:35.363 ' 00:14:35.363 
06:44:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:35.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.363 --rc genhtml_branch_coverage=1 00:14:35.363 --rc genhtml_function_coverage=1 00:14:35.363 --rc genhtml_legend=1 00:14:35.363 --rc geninfo_all_blocks=1 00:14:35.363 --rc geninfo_unexecuted_blocks=1 00:14:35.363 00:14:35.363 ' 00:14:35.363 06:44:49 -- compliance/compliance.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:35.363 06:44:49 -- nvmf/common.sh@7 -- # uname -s 00:14:35.363 06:44:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.363 06:44:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.363 06:44:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.363 06:44:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.363 06:44:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.363 06:44:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.363 06:44:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.363 06:44:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.363 06:44:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.363 06:44:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.363 06:44:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:14:35.363 06:44:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:14:35.363 06:44:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.363 06:44:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.622 06:44:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:35.622 06:44:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:35.622 06:44:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.622 06:44:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.622 06:44:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.622 06:44:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.622 06:44:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.622 06:44:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.622 06:44:49 -- paths/export.sh@5 -- # export PATH 00:14:35.622 06:44:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.622 06:44:49 -- nvmf/common.sh@46 -- # : 0 00:14:35.622 06:44:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:35.622 06:44:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:35.622 06:44:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:35.622 06:44:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.622 06:44:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.622 06:44:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:35.622 06:44:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:35.622 06:44:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:35.622 06:44:49 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:35.622 06:44:49 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:35.622 06:44:49 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:35.622 06:44:49 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:35.622 06:44:49 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:35.622 06:44:49 -- compliance/compliance.sh@20 -- # nvmfpid=71824 00:14:35.622 06:44:49 -- compliance/compliance.sh@21 -- # echo 'Process pid: 71824' 00:14:35.622 06:44:49 -- compliance/compliance.sh@19 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:35.622 Process pid: 71824 00:14:35.622 06:44:49 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:35.622 06:44:49 -- compliance/compliance.sh@24 -- # waitforlisten 71824 00:14:35.622 06:44:49 -- common/autotest_common.sh@829 -- # '[' -z 71824 ']' 00:14:35.622 06:44:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.622 06:44:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:35.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.622 06:44:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.622 06:44:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:35.622 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:14:35.622 [2024-12-14 06:44:49.418619] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:14:35.622 [2024-12-14 06:44:49.418753] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.622 [2024-12-14 06:44:49.555236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:35.881 [2024-12-14 06:44:49.701596] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:35.881 [2024-12-14 06:44:49.701823] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.881 [2024-12-14 06:44:49.701837] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.881 [2024-12-14 06:44:49.701846] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.881 [2024-12-14 06:44:49.702041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.881 [2024-12-14 06:44:49.702192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.881 [2024-12-14 06:44:49.702199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.448 06:44:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:36.448 06:44:50 -- common/autotest_common.sh@862 -- # return 0 00:14:36.448 06:44:50 -- compliance/compliance.sh@26 -- # sleep 1 00:14:37.825 06:44:51 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:37.825 06:44:51 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:37.825 06:44:51 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:37.825 06:44:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.825 06:44:51 -- common/autotest_common.sh@10 -- # set +x 00:14:37.825 06:44:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.825 06:44:51 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:37.825 06:44:51 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:37.825 06:44:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.825 06:44:51 -- common/autotest_common.sh@10 -- # set +x 00:14:37.825 malloc0 00:14:37.825 06:44:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.825 06:44:51 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:37.825 06:44:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.825 06:44:51 -- common/autotest_common.sh@10 -- # set +x 00:14:37.825 06:44:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.825 06:44:51 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:37.825 06:44:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.825 06:44:51 -- common/autotest_common.sh@10 -- # set +x 00:14:37.825 06:44:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.825 06:44:51 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:37.825 06:44:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.825 06:44:51 -- common/autotest_common.sh@10 -- # set +x 00:14:37.825 06:44:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.825 06:44:51 -- compliance/compliance.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g -r 
'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:37.825 00:14:37.825 00:14:37.825 CUnit - A unit testing framework for C - Version 2.1-3 00:14:37.825 http://cunit.sourceforge.net/ 00:14:37.825 00:14:37.825 00:14:37.825 Suite: nvme_compliance 00:14:37.825 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-14 06:44:51.742032] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:37.825 [2024-12-14 06:44:51.742088] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:37.825 [2024-12-14 06:44:51.742116] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:37.825 passed 00:14:38.084 Test: admin_identify_ctrlr_verify_fused ...passed 00:14:38.084 Test: admin_identify_ns ...[2024-12-14 06:44:51.986999] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:38.084 [2024-12-14 06:44:51.994990] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:38.084 passed 00:14:38.343 Test: admin_get_features_mandatory_features ...passed 00:14:38.343 Test: admin_get_features_optional_features ...passed 00:14:38.601 Test: admin_set_features_number_of_queues ...passed 00:14:38.601 Test: admin_get_log_page_mandatory_logs ...passed 00:14:38.860 Test: admin_get_log_page_with_lpo ...[2024-12-14 06:44:52.635966] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:38.860 passed 00:14:38.860 Test: fabric_property_get ...passed 00:14:38.860 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-14 06:44:52.827581] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:39.119 passed 00:14:39.119 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-14 06:44:52.998985] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:39.119 [2024-12-14 06:44:53.014973] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:39.119 passed 00:14:39.119 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-14 06:44:53.107731] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:39.378 passed 00:14:39.378 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-14 06:44:53.274974] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:39.378 [2024-12-14 06:44:53.298957] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:39.378 passed 00:14:39.636 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-14 06:44:53.394377] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:39.636 [2024-12-14 06:44:53.394461] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:39.636 passed 00:14:39.636 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-14 06:44:53.574989] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:39.636 [2024-12-14 06:44:53.582966] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:39.636 [2024-12-14 06:44:53.590984] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:39.636 [2024-12-14 06:44:53.598972] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:39.895 passed 
00:14:39.895 Test: admin_create_io_sq_verify_pc ...[2024-12-14 06:44:53.732970] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:39.895 passed 00:14:41.273 Test: admin_create_io_qp_max_qps ...[2024-12-14 06:44:54.883996] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:14:41.535 passed 00:14:41.535 Test: admin_create_io_sq_shared_cq ...[2024-12-14 06:44:55.486958] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:41.793 passed 00:14:41.793 00:14:41.793 Run Summary: Type Total Ran Passed Failed Inactive 00:14:41.793 suites 1 1 n/a 0 0 00:14:41.793 tests 18 18 18 0 0 00:14:41.793 asserts 360 360 360 0 n/a 00:14:41.793 00:14:41.793 Elapsed time = 1.569 seconds 00:14:41.793 06:44:55 -- compliance/compliance.sh@42 -- # killprocess 71824 00:14:41.793 06:44:55 -- common/autotest_common.sh@936 -- # '[' -z 71824 ']' 00:14:41.793 06:44:55 -- common/autotest_common.sh@940 -- # kill -0 71824 00:14:41.793 06:44:55 -- common/autotest_common.sh@941 -- # uname 00:14:41.793 06:44:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:41.793 06:44:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71824 00:14:41.793 06:44:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:41.793 06:44:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:41.793 killing process with pid 71824 00:14:41.793 06:44:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71824' 00:14:41.793 06:44:55 -- common/autotest_common.sh@955 -- # kill 71824 00:14:41.793 06:44:55 -- common/autotest_common.sh@960 -- # wait 71824 00:14:42.052 06:44:55 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:42.052 06:44:55 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:42.052 00:14:42.052 real 0m6.824s 00:14:42.052 user 0m18.856s 00:14:42.052 sys 0m0.622s 00:14:42.052 06:44:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:42.052 ************************************ 00:14:42.052 END TEST nvmf_vfio_user_nvme_compliance 00:14:42.052 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:14:42.052 ************************************ 00:14:42.052 06:44:56 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:42.052 06:44:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:42.052 06:44:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:42.052 06:44:56 -- common/autotest_common.sh@10 -- # set +x 00:14:42.052 ************************************ 00:14:42.052 START TEST nvmf_vfio_user_fuzz 00:14:42.052 ************************************ 00:14:42.052 06:44:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:42.312 * Looking for test storage... 
00:14:42.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:42.312 06:44:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:42.312 06:44:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:42.312 06:44:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:42.312 06:44:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:42.312 06:44:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:42.312 06:44:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:42.312 06:44:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:42.312 06:44:56 -- scripts/common.sh@335 -- # IFS=.-: 00:14:42.312 06:44:56 -- scripts/common.sh@335 -- # read -ra ver1 00:14:42.312 06:44:56 -- scripts/common.sh@336 -- # IFS=.-: 00:14:42.312 06:44:56 -- scripts/common.sh@336 -- # read -ra ver2 00:14:42.312 06:44:56 -- scripts/common.sh@337 -- # local 'op=<' 00:14:42.312 06:44:56 -- scripts/common.sh@339 -- # ver1_l=2 00:14:42.312 06:44:56 -- scripts/common.sh@340 -- # ver2_l=1 00:14:42.312 06:44:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:42.312 06:44:56 -- scripts/common.sh@343 -- # case "$op" in 00:14:42.312 06:44:56 -- scripts/common.sh@344 -- # : 1 00:14:42.312 06:44:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:42.312 06:44:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:42.312 06:44:56 -- scripts/common.sh@364 -- # decimal 1 00:14:42.312 06:44:56 -- scripts/common.sh@352 -- # local d=1 00:14:42.312 06:44:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:42.312 06:44:56 -- scripts/common.sh@354 -- # echo 1 00:14:42.312 06:44:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:42.312 06:44:56 -- scripts/common.sh@365 -- # decimal 2 00:14:42.312 06:44:56 -- scripts/common.sh@352 -- # local d=2 00:14:42.312 06:44:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:42.312 06:44:56 -- scripts/common.sh@354 -- # echo 2 00:14:42.312 06:44:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:42.312 06:44:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:42.312 06:44:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:42.312 06:44:56 -- scripts/common.sh@367 -- # return 0 00:14:42.312 06:44:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:42.312 06:44:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:42.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.312 --rc genhtml_branch_coverage=1 00:14:42.312 --rc genhtml_function_coverage=1 00:14:42.312 --rc genhtml_legend=1 00:14:42.312 --rc geninfo_all_blocks=1 00:14:42.312 --rc geninfo_unexecuted_blocks=1 00:14:42.312 00:14:42.312 ' 00:14:42.312 06:44:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:42.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.312 --rc genhtml_branch_coverage=1 00:14:42.312 --rc genhtml_function_coverage=1 00:14:42.312 --rc genhtml_legend=1 00:14:42.312 --rc geninfo_all_blocks=1 00:14:42.312 --rc geninfo_unexecuted_blocks=1 00:14:42.312 00:14:42.312 ' 00:14:42.312 06:44:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:42.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.312 --rc genhtml_branch_coverage=1 00:14:42.312 --rc genhtml_function_coverage=1 00:14:42.312 --rc genhtml_legend=1 00:14:42.312 --rc geninfo_all_blocks=1 00:14:42.312 --rc geninfo_unexecuted_blocks=1 00:14:42.312 00:14:42.312 ' 00:14:42.312 
06:44:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:42.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.312 --rc genhtml_branch_coverage=1 00:14:42.312 --rc genhtml_function_coverage=1 00:14:42.312 --rc genhtml_legend=1 00:14:42.312 --rc geninfo_all_blocks=1 00:14:42.312 --rc geninfo_unexecuted_blocks=1 00:14:42.312 00:14:42.312 ' 00:14:42.312 06:44:56 -- target/vfio_user_fuzz.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:42.312 06:44:56 -- nvmf/common.sh@7 -- # uname -s 00:14:42.312 06:44:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.312 06:44:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.312 06:44:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.312 06:44:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.312 06:44:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.312 06:44:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.312 06:44:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.312 06:44:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.312 06:44:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.312 06:44:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.312 06:44:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:14:42.312 06:44:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:14:42.312 06:44:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.312 06:44:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.312 06:44:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:42.312 06:44:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:42.312 06:44:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.312 06:44:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.312 06:44:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.312 06:44:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.312 06:44:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.312 06:44:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.312 06:44:56 -- paths/export.sh@5 -- # export PATH 00:14:42.312 06:44:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.312 06:44:56 -- nvmf/common.sh@46 -- # : 0 00:14:42.312 06:44:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:42.312 06:44:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:42.312 06:44:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:42.312 06:44:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.312 06:44:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.312 06:44:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:42.312 06:44:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:42.312 06:44:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:42.312 06:44:56 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:42.312 06:44:56 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:42.312 06:44:56 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:42.312 06:44:56 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:42.312 06:44:56 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:42.312 06:44:56 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:42.312 06:44:56 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:42.312 06:44:56 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=71979 00:14:42.312 Process pid: 71979 00:14:42.312 06:44:56 -- target/vfio_user_fuzz.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:42.312 06:44:56 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 71979' 00:14:42.312 06:44:56 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:42.312 06:44:56 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 71979 00:14:42.312 06:44:56 -- common/autotest_common.sh@829 -- # '[' -z 71979 ']' 00:14:42.312 06:44:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.312 06:44:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:42.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.312 06:44:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:42.312 06:44:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:42.312 06:44:56 -- common/autotest_common.sh@10 -- # set +x 00:14:43.686 06:44:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:43.686 06:44:57 -- common/autotest_common.sh@862 -- # return 0 00:14:43.686 06:44:57 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:44.621 06:44:58 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:44.621 06:44:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.621 06:44:58 -- common/autotest_common.sh@10 -- # set +x 00:14:44.621 06:44:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.621 06:44:58 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:44.621 06:44:58 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:44.621 06:44:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.621 06:44:58 -- common/autotest_common.sh@10 -- # set +x 00:14:44.621 malloc0 00:14:44.621 06:44:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.621 06:44:58 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:44.621 06:44:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.621 06:44:58 -- common/autotest_common.sh@10 -- # set +x 00:14:44.621 06:44:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.621 06:44:58 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:44.621 06:44:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.621 06:44:58 -- common/autotest_common.sh@10 -- # set +x 00:14:44.621 06:44:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.621 06:44:58 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:44.621 06:44:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.621 06:44:58 -- common/autotest_common.sh@10 -- # set +x 00:14:44.621 06:44:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.621 06:44:58 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:44.621 06:44:58 -- target/vfio_user_fuzz.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:45.188 Shutting down the fuzz application 00:14:45.188 06:44:58 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:45.188 06:44:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.188 06:44:58 -- common/autotest_common.sh@10 -- # set +x 00:14:45.188 06:44:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.188 06:44:58 -- target/vfio_user_fuzz.sh@46 -- # killprocess 71979 00:14:45.188 06:44:58 -- common/autotest_common.sh@936 -- # '[' -z 71979 ']' 00:14:45.188 06:44:58 -- common/autotest_common.sh@940 -- # kill -0 71979 00:14:45.188 06:44:58 -- common/autotest_common.sh@941 -- # uname 00:14:45.188 06:44:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:45.188 06:44:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71979 00:14:45.188 06:44:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:45.188 06:44:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 
00:14:45.188 killing process with pid 71979 00:14:45.188 06:44:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71979' 00:14:45.188 06:44:58 -- common/autotest_common.sh@955 -- # kill 71979 00:14:45.188 06:44:58 -- common/autotest_common.sh@960 -- # wait 71979 00:14:45.446 06:44:59 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_log.txt /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:45.446 06:44:59 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:45.446 00:14:45.446 real 0m3.265s 00:14:45.446 user 0m3.628s 00:14:45.446 sys 0m0.509s 00:14:45.446 06:44:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:45.446 06:44:59 -- common/autotest_common.sh@10 -- # set +x 00:14:45.446 ************************************ 00:14:45.446 END TEST nvmf_vfio_user_fuzz 00:14:45.446 ************************************ 00:14:45.446 06:44:59 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:45.446 06:44:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:45.446 06:44:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:45.446 06:44:59 -- common/autotest_common.sh@10 -- # set +x 00:14:45.446 ************************************ 00:14:45.446 START TEST nvmf_host_management 00:14:45.446 ************************************ 00:14:45.446 06:44:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:45.446 * Looking for test storage... 00:14:45.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:45.446 06:44:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:45.706 06:44:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:45.706 06:44:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:45.706 06:44:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:45.706 06:44:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:45.706 06:44:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:45.706 06:44:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:45.706 06:44:59 -- scripts/common.sh@335 -- # IFS=.-: 00:14:45.706 06:44:59 -- scripts/common.sh@335 -- # read -ra ver1 00:14:45.706 06:44:59 -- scripts/common.sh@336 -- # IFS=.-: 00:14:45.706 06:44:59 -- scripts/common.sh@336 -- # read -ra ver2 00:14:45.706 06:44:59 -- scripts/common.sh@337 -- # local 'op=<' 00:14:45.706 06:44:59 -- scripts/common.sh@339 -- # ver1_l=2 00:14:45.706 06:44:59 -- scripts/common.sh@340 -- # ver2_l=1 00:14:45.706 06:44:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:45.706 06:44:59 -- scripts/common.sh@343 -- # case "$op" in 00:14:45.706 06:44:59 -- scripts/common.sh@344 -- # : 1 00:14:45.706 06:44:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:45.706 06:44:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:45.706 06:44:59 -- scripts/common.sh@364 -- # decimal 1 00:14:45.706 06:44:59 -- scripts/common.sh@352 -- # local d=1 00:14:45.706 06:44:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:45.706 06:44:59 -- scripts/common.sh@354 -- # echo 1 00:14:45.706 06:44:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:45.706 06:44:59 -- scripts/common.sh@365 -- # decimal 2 00:14:45.706 06:44:59 -- scripts/common.sh@352 -- # local d=2 00:14:45.706 06:44:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:45.706 06:44:59 -- scripts/common.sh@354 -- # echo 2 00:14:45.706 06:44:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:45.706 06:44:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:45.706 06:44:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:45.706 06:44:59 -- scripts/common.sh@367 -- # return 0 00:14:45.706 06:44:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:45.706 06:44:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:45.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.706 --rc genhtml_branch_coverage=1 00:14:45.706 --rc genhtml_function_coverage=1 00:14:45.706 --rc genhtml_legend=1 00:14:45.706 --rc geninfo_all_blocks=1 00:14:45.706 --rc geninfo_unexecuted_blocks=1 00:14:45.706 00:14:45.706 ' 00:14:45.706 06:44:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:45.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.706 --rc genhtml_branch_coverage=1 00:14:45.706 --rc genhtml_function_coverage=1 00:14:45.706 --rc genhtml_legend=1 00:14:45.706 --rc geninfo_all_blocks=1 00:14:45.706 --rc geninfo_unexecuted_blocks=1 00:14:45.706 00:14:45.706 ' 00:14:45.706 06:44:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:45.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.706 --rc genhtml_branch_coverage=1 00:14:45.706 --rc genhtml_function_coverage=1 00:14:45.706 --rc genhtml_legend=1 00:14:45.706 --rc geninfo_all_blocks=1 00:14:45.706 --rc geninfo_unexecuted_blocks=1 00:14:45.706 00:14:45.706 ' 00:14:45.706 06:44:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:45.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.706 --rc genhtml_branch_coverage=1 00:14:45.706 --rc genhtml_function_coverage=1 00:14:45.706 --rc genhtml_legend=1 00:14:45.706 --rc geninfo_all_blocks=1 00:14:45.706 --rc geninfo_unexecuted_blocks=1 00:14:45.706 00:14:45.706 ' 00:14:45.706 06:44:59 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:45.706 06:44:59 -- nvmf/common.sh@7 -- # uname -s 00:14:45.706 06:44:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.706 06:44:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.706 06:44:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.706 06:44:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.706 06:44:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.706 06:44:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.706 06:44:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.706 06:44:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.706 06:44:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.706 06:44:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.706 06:44:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 
00:14:45.706 06:44:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:14:45.706 06:44:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.706 06:44:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.706 06:44:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:45.706 06:44:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:45.706 06:44:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.706 06:44:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.706 06:44:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.706 06:44:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.706 06:44:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.706 06:44:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.706 06:44:59 -- paths/export.sh@5 -- # export PATH 00:14:45.706 06:44:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.706 06:44:59 -- nvmf/common.sh@46 -- # : 0 00:14:45.706 06:44:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:45.706 06:44:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:45.706 06:44:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:45.706 06:44:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.706 06:44:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.706 06:44:59 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:45.706 06:44:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:45.706 06:44:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:45.706 06:44:59 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:45.706 06:44:59 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:45.706 06:44:59 -- target/host_management.sh@104 -- # nvmftestinit 00:14:45.706 06:44:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:45.706 06:44:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.706 06:44:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:45.706 06:44:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:45.706 06:44:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:45.706 06:44:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.706 06:44:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.706 06:44:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.706 06:44:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:45.706 06:44:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:45.706 06:44:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:45.706 06:44:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:45.706 06:44:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:45.706 06:44:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:45.706 06:44:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.706 06:44:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.706 06:44:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:45.706 06:44:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:45.706 06:44:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:45.706 06:44:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:45.706 06:44:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:45.706 06:44:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.706 06:44:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:45.706 06:44:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:45.706 06:44:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:45.706 06:44:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:45.706 06:44:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:45.706 06:44:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:45.706 Cannot find device "nvmf_tgt_br" 00:14:45.706 06:44:59 -- nvmf/common.sh@154 -- # true 00:14:45.706 06:44:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:45.706 Cannot find device "nvmf_tgt_br2" 00:14:45.706 06:44:59 -- nvmf/common.sh@155 -- # true 00:14:45.706 06:44:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:45.706 06:44:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:45.706 Cannot find device "nvmf_tgt_br" 00:14:45.706 06:44:59 -- nvmf/common.sh@157 -- # true 00:14:45.706 06:44:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:45.706 Cannot find device "nvmf_tgt_br2" 00:14:45.706 06:44:59 -- nvmf/common.sh@158 -- # true 00:14:45.706 06:44:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:45.707 06:44:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:45.965 06:44:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:14:45.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.965 06:44:59 -- nvmf/common.sh@161 -- # true 00:14:45.965 06:44:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:45.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.965 06:44:59 -- nvmf/common.sh@162 -- # true 00:14:45.965 06:44:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:45.965 06:44:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:45.965 06:44:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:45.965 06:44:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:45.965 06:44:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:45.965 06:44:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:45.965 06:44:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:45.965 06:44:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:45.965 06:44:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:45.965 06:44:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:45.965 06:44:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:45.965 06:44:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:45.965 06:44:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:45.965 06:44:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:45.965 06:44:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:45.965 06:44:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:45.965 06:44:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:45.965 06:44:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:45.965 06:44:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:45.965 06:44:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:45.965 06:44:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:45.965 06:44:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:45.965 06:44:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:45.965 06:44:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:45.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:14:45.965 00:14:45.965 --- 10.0.0.2 ping statistics --- 00:14:45.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.965 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:14:45.965 06:44:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:45.965 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:45.965 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:14:45.965 00:14:45.965 --- 10.0.0.3 ping statistics --- 00:14:45.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.965 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:14:45.965 06:44:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:45.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:45.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:45.965 00:14:45.965 --- 10.0.0.1 ping statistics --- 00:14:45.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.965 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:45.965 06:44:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.965 06:44:59 -- nvmf/common.sh@421 -- # return 0 00:14:45.965 06:44:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:45.965 06:44:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.965 06:44:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:45.965 06:44:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:45.965 06:44:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.965 06:44:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:45.965 06:44:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:46.224 06:44:59 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:14:46.224 06:44:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:46.224 06:44:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:46.224 06:44:59 -- common/autotest_common.sh@10 -- # set +x 00:14:46.224 ************************************ 00:14:46.224 START TEST nvmf_host_management 00:14:46.224 ************************************ 00:14:46.224 06:44:59 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:14:46.224 06:44:59 -- target/host_management.sh@69 -- # starttarget 00:14:46.224 06:44:59 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:46.224 06:44:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:46.224 06:44:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:46.224 06:44:59 -- common/autotest_common.sh@10 -- # set +x 00:14:46.224 06:44:59 -- nvmf/common.sh@469 -- # nvmfpid=72219 00:14:46.224 06:44:59 -- nvmf/common.sh@470 -- # waitforlisten 72219 00:14:46.224 06:44:59 -- common/autotest_common.sh@829 -- # '[' -z 72219 ']' 00:14:46.224 06:44:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.224 06:44:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:46.224 06:44:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:46.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.224 06:44:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.224 06:44:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:46.224 06:44:59 -- common/autotest_common.sh@10 -- # set +x 00:14:46.224 [2024-12-14 06:45:00.049488] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:46.224 [2024-12-14 06:45:00.049614] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.224 [2024-12-14 06:45:00.193821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:46.483 [2024-12-14 06:45:00.359518] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:46.483 [2024-12-14 06:45:00.360031] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
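For reference, the virtual test network that nvmf_veth_init assembled a few steps earlier (before the target above was launched) collapses to the iproute2 sequence below. Interface names, addresses, and iptables rules are exactly the ones in the trace; the initial teardown of leftovers from a previous run is omitted.

```bash
# Condensed replay of the nvmf_veth_init steps traced above: a network namespace
# holding the target side of two veth pairs, bridged to the initiator-side ends,
# with 10.0.0.1 (initiator), 10.0.0.2 and 10.0.0.3 (target) on one /24.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side veth ends together and allow NVMe/TCP traffic in.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3           # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
```

The target then listens on 10.0.0.2:4420 inside the namespace while the initiator reaches it from 10.0.0.1 through the bridge.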
00:14:46.483 [2024-12-14 06:45:00.360185] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.483 [2024-12-14 06:45:00.360326] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.483 [2024-12-14 06:45:00.360759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.483 [2024-12-14 06:45:00.361090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:46.483 [2024-12-14 06:45:00.361177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:46.483 [2024-12-14 06:45:00.361188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.418 06:45:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:47.418 06:45:01 -- common/autotest_common.sh@862 -- # return 0 00:14:47.418 06:45:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:47.418 06:45:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:47.418 06:45:01 -- common/autotest_common.sh@10 -- # set +x 00:14:47.418 06:45:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.418 06:45:01 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:47.418 06:45:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.418 06:45:01 -- common/autotest_common.sh@10 -- # set +x 00:14:47.418 [2024-12-14 06:45:01.138548] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.418 06:45:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.418 06:45:01 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:47.418 06:45:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:47.418 06:45:01 -- common/autotest_common.sh@10 -- # set +x 00:14:47.418 06:45:01 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:47.418 06:45:01 -- target/host_management.sh@23 -- # cat 00:14:47.418 06:45:01 -- target/host_management.sh@30 -- # rpc_cmd 00:14:47.418 06:45:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.418 06:45:01 -- common/autotest_common.sh@10 -- # set +x 00:14:47.418 Malloc0 00:14:47.418 [2024-12-14 06:45:01.227222] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.418 06:45:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.418 06:45:01 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:47.418 06:45:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:47.418 06:45:01 -- common/autotest_common.sh@10 -- # set +x 00:14:47.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:47.418 06:45:01 -- target/host_management.sh@73 -- # perfpid=72295 00:14:47.418 06:45:01 -- target/host_management.sh@74 -- # waitforlisten 72295 /var/tmp/bdevperf.sock 00:14:47.418 06:45:01 -- common/autotest_common.sh@829 -- # '[' -z 72295 ']' 00:14:47.418 06:45:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:47.418 06:45:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:47.418 06:45:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
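The rpcs.txt batch that host_management.sh assembles is not echoed into the log, so the exact calls are not visible here. Judging by the notices that do appear (Malloc0 created, a TCP listener on 10.0.0.2:4420, and the cnode0/host0 NQNs used later), a plausible hand-written equivalent is sketched below; treat every value not quoted from the trace, such as the serial number, as a placeholder.

```bash
# Hypothetical equivalent of the batched target-side setup; values not present in
# the trace (e.g. the serial number) are placeholders, not the harness's own.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192      # transport flags as traced above
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
```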
00:14:47.418 06:45:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:47.418 06:45:01 -- common/autotest_common.sh@10 -- # set +x 00:14:47.418 06:45:01 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:47.418 06:45:01 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:47.418 06:45:01 -- nvmf/common.sh@520 -- # config=() 00:14:47.418 06:45:01 -- nvmf/common.sh@520 -- # local subsystem config 00:14:47.418 06:45:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:47.418 06:45:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:47.418 { 00:14:47.418 "params": { 00:14:47.418 "name": "Nvme$subsystem", 00:14:47.418 "trtype": "$TEST_TRANSPORT", 00:14:47.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:47.418 "adrfam": "ipv4", 00:14:47.418 "trsvcid": "$NVMF_PORT", 00:14:47.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:47.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:47.418 "hdgst": ${hdgst:-false}, 00:14:47.418 "ddgst": ${ddgst:-false} 00:14:47.418 }, 00:14:47.418 "method": "bdev_nvme_attach_controller" 00:14:47.418 } 00:14:47.418 EOF 00:14:47.418 )") 00:14:47.418 06:45:01 -- nvmf/common.sh@542 -- # cat 00:14:47.418 06:45:01 -- nvmf/common.sh@544 -- # jq . 00:14:47.418 06:45:01 -- nvmf/common.sh@545 -- # IFS=, 00:14:47.418 06:45:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:47.418 "params": { 00:14:47.418 "name": "Nvme0", 00:14:47.418 "trtype": "tcp", 00:14:47.418 "traddr": "10.0.0.2", 00:14:47.418 "adrfam": "ipv4", 00:14:47.418 "trsvcid": "4420", 00:14:47.418 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:47.418 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:47.418 "hdgst": false, 00:14:47.418 "ddgst": false 00:14:47.418 }, 00:14:47.418 "method": "bdev_nvme_attach_controller" 00:14:47.418 }' 00:14:47.418 [2024-12-14 06:45:01.329686] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:47.418 [2024-12-14 06:45:01.329800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72295 ] 00:14:47.676 [2024-12-14 06:45:01.470999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.677 [2024-12-14 06:45:01.631479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.935 Running I/O for 10 seconds... 
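gen_nvmf_target_json above only emits the bdev_nvme_attach_controller fragment shown and feeds it to bdevperf through /dev/fd/63. Written out to an ordinary file, an equivalent invocation looks roughly like this; the outer subsystems/bdev/config wrapper is the standard SPDK JSON config shape and is assumed here rather than copied from the trace.

```bash
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 64-deep, 64 KiB verify workload for 10 seconds, matching the traced run.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10
```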
00:14:48.504 06:45:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.504 06:45:02 -- common/autotest_common.sh@862 -- # return 0 00:14:48.504 06:45:02 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:48.504 06:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.504 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:14:48.504 06:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.504 06:45:02 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:48.504 06:45:02 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:48.504 06:45:02 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:48.504 06:45:02 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:48.504 06:45:02 -- target/host_management.sh@52 -- # local ret=1 00:14:48.504 06:45:02 -- target/host_management.sh@53 -- # local i 00:14:48.504 06:45:02 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:48.504 06:45:02 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:48.504 06:45:02 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:48.504 06:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.504 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:14:48.504 06:45:02 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:48.504 06:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.504 06:45:02 -- target/host_management.sh@55 -- # read_io_count=1788 00:14:48.504 06:45:02 -- target/host_management.sh@58 -- # '[' 1788 -ge 100 ']' 00:14:48.504 06:45:02 -- target/host_management.sh@59 -- # ret=0 00:14:48.504 06:45:02 -- target/host_management.sh@60 -- # break 00:14:48.504 06:45:02 -- target/host_management.sh@64 -- # return 0 00:14:48.504 06:45:02 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:48.504 06:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.504 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:14:48.504 [2024-12-14 06:45:02.436893] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.504 [2024-12-14 06:45:02.437008] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.504 [2024-12-14 06:45:02.437022] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.504 [2024-12-14 06:45:02.437031] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.504 [2024-12-14 06:45:02.437039] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.504 [2024-12-14 06:45:02.437049] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.504 [2024-12-14 06:45:02.437058] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.504 [2024-12-14 06:45:02.437067] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the 
state(5) to be set 00:14:48.504 [2024-12-14 06:45:02.437075] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.504 [2024-12-14 06:45:02.437084] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.504 [2024-12-14 06:45:02.437093] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.504 [2024-12-14 06:45:02.437103] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.504 [2024-12-14 06:45:02.437112] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.504 [2024-12-14 06:45:02.437120] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437128] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437136] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437145] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437153] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437162] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437171] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437179] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437187] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437196] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437204] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437213] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437221] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437230] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437239] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437247] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437267] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437276] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437285] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437294] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437302] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437311] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437319] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437327] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437336] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437345] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437354] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437362] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437371] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437379] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437387] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437396] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437404] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437412] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437420] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a910 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.437804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.505 [2024-12-14 06:45:02.437848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.505 [2024-12-14 06:45:02.437862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:48.505 [2024-12-14 06:45:02.437873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.505 [2024-12-14 06:45:02.437883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.505 [2024-12-14 06:45:02.437894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.505 [2024-12-14 06:45:02.437904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.505 [2024-12-14 06:45:02.437914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.505 [2024-12-14 06:45:02.437924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280dc0 is same with the state(5) to be set 00:14:48.505 [2024-12-14 06:45:02.440018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.505 [2024-12-14 06:45:02.440050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.505 [2024-12-14 06:45:02.440072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.505 [2024-12-14 06:45:02.440083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.505 [2024-12-14 06:45:02.440095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.505 [2024-12-14 06:45:02.440105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.505 [2024-12-14 06:45:02.440118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.505 [2024-12-14 06:45:02.440127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.505 [2024-12-14 06:45:02.440139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.505 [2024-12-14 06:45:02.440148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.505 [2024-12-14 06:45:02.440159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.505 [2024-12-14 06:45:02.440168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.505 [2024-12-14 06:45:02.440180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.505 [2024-12-14 06:45:02.440189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.505 [2024-12-14 06:45:02.440201] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.505 [2024-12-14 06:45:02.440214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.505 [2024-12-14 06:45:02.440225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.505 [2024-12-14 06:45:02.440234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.505 [2024-12-14 06:45:02.440246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.505 [2024-12-14 06:45:02.440255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.505 [2024-12-14 06:45:02.440267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.505 [2024-12-14 06:45:02.440276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.505 [2024-12-14 06:45:02.440287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.505 [2024-12-14 06:45:02.440297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.505 [2024-12-14 06:45:02.440308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.505 [2024-12-14 06:45:02.440317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.505 [2024-12-14 06:45:02.440328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.505 [2024-12-14 06:45:02.440337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.505 [2024-12-14 06:45:02.440348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.505 [2024-12-14 06:45:02.440357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.505 [2024-12-14 06:45:02.440368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.505 [2024-12-14 06:45:02.440377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.505 [2024-12-14 06:45:02.440388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.505 [2024-12-14 06:45:02.440397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.440984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.440993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.441004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.441014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.441025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.441035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.441047] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.441056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.441067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.441076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.441087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.441096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.441107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.441116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.441129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.441138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.441149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.441158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.441169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.441178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.441189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.441198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.441209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.441220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.506 [2024-12-14 06:45:02.441231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.506 [2024-12-14 06:45:02.441240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.507 [2024-12-14 06:45:02.441251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.507 [2024-12-14 06:45:02.441261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.507 [2024-12-14 06:45:02.441271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.507 [2024-12-14 06:45:02.441280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.507 [2024-12-14 06:45:02.441291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.507 [2024-12-14 06:45:02.441300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.507 [2024-12-14 06:45:02.441320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.507 [2024-12-14 06:45:02.441330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.507 [2024-12-14 06:45:02.441341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.507 [2024-12-14 06:45:02.441351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.507 [2024-12-14 06:45:02.441362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.507 [2024-12-14 06:45:02.441372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.507 [2024-12-14 06:45:02.441383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.507 [2024-12-14 06:45:02.441392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.507 [2024-12-14 06:45:02.441489] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1254400 was disconnected and freed. reset controller. 
00:14:48.507 06:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.507 06:45:02 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:48.507 06:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.507 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:14:48.507 [2024-12-14 06:45:02.442645] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:48.507 task offset: 124672 on job bdev=Nvme0n1 fails 00:14:48.507 00:14:48.507 Latency(us) 00:14:48.507 [2024-12-14T06:45:02.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.507 [2024-12-14T06:45:02.499Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:48.507 [2024-12-14T06:45:02.499Z] Job: Nvme0n1 ended in about 0.60 seconds with error 00:14:48.507 Verification LBA range: start 0x0 length 0x400 00:14:48.507 Nvme0n1 : 0.60 3287.13 205.45 106.47 0.00 18518.00 1891.61 24307.90 00:14:48.507 [2024-12-14T06:45:02.499Z] =================================================================================================================== 00:14:48.507 [2024-12-14T06:45:02.499Z] Total : 3287.13 205.45 106.47 0.00 18518.00 1891.61 24307.90 00:14:48.507 [2024-12-14 06:45:02.444718] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:48.507 [2024-12-14 06:45:02.444749] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1280dc0 (9): Bad file descriptor 00:14:48.507 06:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.507 06:45:02 -- target/host_management.sh@87 -- # sleep 1 00:14:48.507 [2024-12-14 06:45:02.450085] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:49.882 06:45:03 -- target/host_management.sh@91 -- # kill -9 72295 00:14:49.882 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72295) - No such process 00:14:49.882 06:45:03 -- target/host_management.sh@91 -- # true 00:14:49.882 06:45:03 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:49.882 06:45:03 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:49.882 06:45:03 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:49.882 06:45:03 -- nvmf/common.sh@520 -- # config=() 00:14:49.882 06:45:03 -- nvmf/common.sh@520 -- # local subsystem config 00:14:49.882 06:45:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:49.882 06:45:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:49.882 { 00:14:49.882 "params": { 00:14:49.882 "name": "Nvme$subsystem", 00:14:49.882 "trtype": "$TEST_TRANSPORT", 00:14:49.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:49.882 "adrfam": "ipv4", 00:14:49.882 "trsvcid": "$NVMF_PORT", 00:14:49.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:49.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:49.882 "hdgst": ${hdgst:-false}, 00:14:49.882 "ddgst": ${ddgst:-false} 00:14:49.882 }, 00:14:49.882 "method": "bdev_nvme_attach_controller" 00:14:49.882 } 00:14:49.882 EOF 00:14:49.882 )") 00:14:49.882 06:45:03 -- nvmf/common.sh@542 -- # cat 00:14:49.882 06:45:03 -- nvmf/common.sh@544 -- # jq . 
00:14:49.882 06:45:03 -- nvmf/common.sh@545 -- # IFS=, 00:14:49.882 06:45:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:49.882 "params": { 00:14:49.882 "name": "Nvme0", 00:14:49.882 "trtype": "tcp", 00:14:49.882 "traddr": "10.0.0.2", 00:14:49.882 "adrfam": "ipv4", 00:14:49.882 "trsvcid": "4420", 00:14:49.882 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:49.882 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:49.882 "hdgst": false, 00:14:49.882 "ddgst": false 00:14:49.882 }, 00:14:49.882 "method": "bdev_nvme_attach_controller" 00:14:49.882 }' 00:14:49.882 [2024-12-14 06:45:03.510451] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:49.882 [2024-12-14 06:45:03.510562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72341 ] 00:14:49.882 [2024-12-14 06:45:03.646141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.882 [2024-12-14 06:45:03.798460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.141 Running I/O for 1 seconds... 00:14:51.076 00:14:51.076 Latency(us) 00:14:51.076 [2024-12-14T06:45:05.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.076 [2024-12-14T06:45:05.068Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:51.076 Verification LBA range: start 0x0 length 0x400 00:14:51.076 Nvme0n1 : 1.01 3409.40 213.09 0.00 0.00 18432.51 1712.87 26571.87 00:14:51.076 [2024-12-14T06:45:05.068Z] =================================================================================================================== 00:14:51.076 [2024-12-14T06:45:05.068Z] Total : 3409.40 213.09 0.00 0.00 18432.51 1712.87 26571.87 00:14:51.644 06:45:05 -- target/host_management.sh@101 -- # stoptarget 00:14:51.644 06:45:05 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:51.644 06:45:05 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:14:51.644 06:45:05 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:51.644 06:45:05 -- target/host_management.sh@40 -- # nvmftestfini 00:14:51.644 06:45:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:51.644 06:45:05 -- nvmf/common.sh@116 -- # sync 00:14:51.644 06:45:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:51.644 06:45:05 -- nvmf/common.sh@119 -- # set +e 00:14:51.644 06:45:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:51.644 06:45:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:51.644 rmmod nvme_tcp 00:14:51.644 rmmod nvme_fabrics 00:14:51.644 rmmod nvme_keyring 00:14:51.644 06:45:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:51.644 06:45:05 -- nvmf/common.sh@123 -- # set -e 00:14:51.644 06:45:05 -- nvmf/common.sh@124 -- # return 0 00:14:51.644 06:45:05 -- nvmf/common.sh@477 -- # '[' -n 72219 ']' 00:14:51.644 06:45:05 -- nvmf/common.sh@478 -- # killprocess 72219 00:14:51.644 06:45:05 -- common/autotest_common.sh@936 -- # '[' -z 72219 ']' 00:14:51.644 06:45:05 -- common/autotest_common.sh@940 -- # kill -0 72219 00:14:51.644 06:45:05 -- common/autotest_common.sh@941 -- # uname 00:14:51.644 06:45:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:51.644 06:45:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72219 00:14:51.644 
06:45:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:51.644 06:45:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:51.644 killing process with pid 72219 00:14:51.644 06:45:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72219' 00:14:51.644 06:45:05 -- common/autotest_common.sh@955 -- # kill 72219 00:14:51.644 06:45:05 -- common/autotest_common.sh@960 -- # wait 72219 00:14:51.913 [2024-12-14 06:45:05.860534] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:51.913 06:45:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:51.913 06:45:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:51.913 06:45:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:51.913 06:45:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:51.913 06:45:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:51.913 06:45:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.913 06:45:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.913 06:45:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.185 06:45:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:52.185 00:14:52.185 real 0m5.958s 00:14:52.185 user 0m24.632s 00:14:52.185 sys 0m1.467s 00:14:52.185 06:45:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:52.186 06:45:05 -- common/autotest_common.sh@10 -- # set +x 00:14:52.186 ************************************ 00:14:52.186 END TEST nvmf_host_management 00:14:52.186 ************************************ 00:14:52.186 06:45:05 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:14:52.186 00:14:52.186 real 0m6.620s 00:14:52.186 user 0m24.824s 00:14:52.186 sys 0m1.765s 00:14:52.186 06:45:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:52.186 06:45:05 -- common/autotest_common.sh@10 -- # set +x 00:14:52.186 ************************************ 00:14:52.186 END TEST nvmf_host_management 00:14:52.186 ************************************ 00:14:52.186 06:45:06 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:52.186 06:45:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:52.186 06:45:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:52.186 06:45:06 -- common/autotest_common.sh@10 -- # set +x 00:14:52.186 ************************************ 00:14:52.186 START TEST nvmf_lvol 00:14:52.186 ************************************ 00:14:52.186 06:45:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:52.186 * Looking for test storage... 
00:14:52.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:52.186 06:45:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:52.186 06:45:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:52.186 06:45:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:52.445 06:45:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:52.445 06:45:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:52.445 06:45:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:52.445 06:45:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:52.445 06:45:06 -- scripts/common.sh@335 -- # IFS=.-: 00:14:52.445 06:45:06 -- scripts/common.sh@335 -- # read -ra ver1 00:14:52.445 06:45:06 -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.445 06:45:06 -- scripts/common.sh@336 -- # read -ra ver2 00:14:52.445 06:45:06 -- scripts/common.sh@337 -- # local 'op=<' 00:14:52.445 06:45:06 -- scripts/common.sh@339 -- # ver1_l=2 00:14:52.445 06:45:06 -- scripts/common.sh@340 -- # ver2_l=1 00:14:52.445 06:45:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:52.445 06:45:06 -- scripts/common.sh@343 -- # case "$op" in 00:14:52.445 06:45:06 -- scripts/common.sh@344 -- # : 1 00:14:52.445 06:45:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:52.445 06:45:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:52.445 06:45:06 -- scripts/common.sh@364 -- # decimal 1 00:14:52.445 06:45:06 -- scripts/common.sh@352 -- # local d=1 00:14:52.445 06:45:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.445 06:45:06 -- scripts/common.sh@354 -- # echo 1 00:14:52.445 06:45:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:52.445 06:45:06 -- scripts/common.sh@365 -- # decimal 2 00:14:52.445 06:45:06 -- scripts/common.sh@352 -- # local d=2 00:14:52.445 06:45:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.445 06:45:06 -- scripts/common.sh@354 -- # echo 2 00:14:52.445 06:45:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:52.445 06:45:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:52.445 06:45:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:52.445 06:45:06 -- scripts/common.sh@367 -- # return 0 00:14:52.445 06:45:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.445 06:45:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:52.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.445 --rc genhtml_branch_coverage=1 00:14:52.445 --rc genhtml_function_coverage=1 00:14:52.445 --rc genhtml_legend=1 00:14:52.445 --rc geninfo_all_blocks=1 00:14:52.445 --rc geninfo_unexecuted_blocks=1 00:14:52.445 00:14:52.445 ' 00:14:52.445 06:45:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:52.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.445 --rc genhtml_branch_coverage=1 00:14:52.445 --rc genhtml_function_coverage=1 00:14:52.445 --rc genhtml_legend=1 00:14:52.445 --rc geninfo_all_blocks=1 00:14:52.445 --rc geninfo_unexecuted_blocks=1 00:14:52.445 00:14:52.445 ' 00:14:52.445 06:45:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:52.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.445 --rc genhtml_branch_coverage=1 00:14:52.445 --rc genhtml_function_coverage=1 00:14:52.445 --rc genhtml_legend=1 00:14:52.445 --rc geninfo_all_blocks=1 00:14:52.445 --rc geninfo_unexecuted_blocks=1 00:14:52.445 00:14:52.445 ' 00:14:52.445 
06:45:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:52.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.445 --rc genhtml_branch_coverage=1 00:14:52.445 --rc genhtml_function_coverage=1 00:14:52.445 --rc genhtml_legend=1 00:14:52.445 --rc geninfo_all_blocks=1 00:14:52.445 --rc geninfo_unexecuted_blocks=1 00:14:52.445 00:14:52.445 ' 00:14:52.445 06:45:06 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.445 06:45:06 -- nvmf/common.sh@7 -- # uname -s 00:14:52.445 06:45:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.445 06:45:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.445 06:45:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.445 06:45:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.445 06:45:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.445 06:45:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.445 06:45:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.445 06:45:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.445 06:45:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.445 06:45:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.445 06:45:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:14:52.445 06:45:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:14:52.445 06:45:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.445 06:45:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.445 06:45:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:52.445 06:45:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.445 06:45:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.445 06:45:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.445 06:45:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.445 06:45:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.446 06:45:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.446 06:45:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.446 06:45:06 -- paths/export.sh@5 -- # export PATH 00:14:52.446 06:45:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.446 06:45:06 -- nvmf/common.sh@46 -- # : 0 00:14:52.446 06:45:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:52.446 06:45:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:52.446 06:45:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:52.446 06:45:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.446 06:45:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.446 06:45:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:52.446 06:45:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:52.446 06:45:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:52.446 06:45:06 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:52.446 06:45:06 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:52.446 06:45:06 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:52.446 06:45:06 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:52.446 06:45:06 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:52.446 06:45:06 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:52.446 06:45:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:52.446 06:45:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.446 06:45:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:52.446 06:45:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:52.446 06:45:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:52.446 06:45:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.446 06:45:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.446 06:45:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.446 06:45:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:52.446 06:45:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:52.446 06:45:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:52.446 06:45:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:52.446 06:45:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:52.446 06:45:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:52.446 06:45:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.446 06:45:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:52.446 06:45:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:52.446 06:45:06 -- 
nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:52.446 06:45:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:52.446 06:45:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:52.446 06:45:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:52.446 06:45:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.446 06:45:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:52.446 06:45:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:52.446 06:45:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:52.446 06:45:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:52.446 06:45:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:52.446 06:45:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:52.446 Cannot find device "nvmf_tgt_br" 00:14:52.446 06:45:06 -- nvmf/common.sh@154 -- # true 00:14:52.446 06:45:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.446 Cannot find device "nvmf_tgt_br2" 00:14:52.446 06:45:06 -- nvmf/common.sh@155 -- # true 00:14:52.446 06:45:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:52.446 06:45:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:52.446 Cannot find device "nvmf_tgt_br" 00:14:52.446 06:45:06 -- nvmf/common.sh@157 -- # true 00:14:52.446 06:45:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:52.446 Cannot find device "nvmf_tgt_br2" 00:14:52.446 06:45:06 -- nvmf/common.sh@158 -- # true 00:14:52.446 06:45:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:52.446 06:45:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:52.446 06:45:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.446 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.446 06:45:06 -- nvmf/common.sh@161 -- # true 00:14:52.446 06:45:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.446 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.446 06:45:06 -- nvmf/common.sh@162 -- # true 00:14:52.446 06:45:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:52.446 06:45:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:52.446 06:45:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:52.446 06:45:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:52.446 06:45:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:52.446 06:45:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:52.446 06:45:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:52.446 06:45:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:52.705 06:45:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:52.705 06:45:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:52.705 06:45:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:52.705 06:45:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:52.705 06:45:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:52.705 06:45:06 -- 
nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:52.705 06:45:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:52.705 06:45:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:52.705 06:45:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:52.705 06:45:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:52.705 06:45:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:52.705 06:45:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:52.705 06:45:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:52.705 06:45:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:52.705 06:45:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:52.705 06:45:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:52.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:52.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:14:52.705 00:14:52.705 --- 10.0.0.2 ping statistics --- 00:14:52.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.705 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:14:52.705 06:45:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:52.705 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:52.705 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:14:52.705 00:14:52.705 --- 10.0.0.3 ping statistics --- 00:14:52.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.705 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:52.705 06:45:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:52.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:52.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:14:52.705 00:14:52.705 --- 10.0.0.1 ping statistics --- 00:14:52.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.705 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:52.705 06:45:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.705 06:45:06 -- nvmf/common.sh@421 -- # return 0 00:14:52.705 06:45:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:52.705 06:45:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.705 06:45:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:52.705 06:45:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:52.705 06:45:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.705 06:45:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:52.705 06:45:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:52.705 06:45:06 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:52.705 06:45:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:52.705 06:45:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:52.705 06:45:06 -- common/autotest_common.sh@10 -- # set +x 00:14:52.705 06:45:06 -- nvmf/common.sh@469 -- # nvmfpid=72579 00:14:52.705 06:45:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:52.705 06:45:06 -- nvmf/common.sh@470 -- # waitforlisten 72579 00:14:52.705 06:45:06 -- common/autotest_common.sh@829 -- # '[' -z 72579 ']' 00:14:52.705 06:45:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.705 06:45:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:52.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.705 06:45:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.705 06:45:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:52.705 06:45:06 -- common/autotest_common.sh@10 -- # set +x 00:14:52.705 [2024-12-14 06:45:06.647889] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:52.705 [2024-12-14 06:45:06.648047] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.964 [2024-12-14 06:45:06.790868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:52.964 [2024-12-14 06:45:06.953711] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:52.964 [2024-12-14 06:45:06.953925] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.964 [2024-12-14 06:45:06.953961] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.964 [2024-12-14 06:45:06.953976] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
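The nvmf_veth_init sequence traced above builds the whole test network inside the nvmf_tgt_ns_spdk namespace: one veth pair per endpoint bridged through nvmf_br, 10.0.0.1 on the initiator side, 10.0.0.2 and 10.0.0.3 on the target interfaces moved into the namespace, an iptables rule admitting TCP port 4420, and three pings to verify connectivity before nvmf_tgt is started inside the namespace with -m 0x7. A condensed, illustrative sketch of the same plumbing follows (device names and addresses copied from the trace; the second target interface is omitted for brevity; run as root; not a substitute for nvmf/common.sh):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target address inside the namespace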
00:14:52.964 [2024-12-14 06:45:06.954092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.964 [2024-12-14 06:45:06.954498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.964 [2024-12-14 06:45:06.954547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.899 06:45:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.899 06:45:07 -- common/autotest_common.sh@862 -- # return 0 00:14:53.899 06:45:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:53.899 06:45:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:53.899 06:45:07 -- common/autotest_common.sh@10 -- # set +x 00:14:53.899 06:45:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.899 06:45:07 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:54.157 [2024-12-14 06:45:07.986868] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.157 06:45:08 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:54.416 06:45:08 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:54.416 06:45:08 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:54.674 06:45:08 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:54.674 06:45:08 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:54.933 06:45:08 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:55.500 06:45:09 -- target/nvmf_lvol.sh@29 -- # lvs=7f2d4df5-daaf-462e-9d69-d0f22b029fb6 00:14:55.500 06:45:09 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7f2d4df5-daaf-462e-9d69-d0f22b029fb6 lvol 20 00:14:55.758 06:45:09 -- target/nvmf_lvol.sh@32 -- # lvol=a794e49a-74c3-428e-ab6e-0cfcbd9e1cae 00:14:55.758 06:45:09 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:56.017 06:45:09 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a794e49a-74c3-428e-ab6e-0cfcbd9e1cae 00:14:56.275 06:45:10 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:56.534 [2024-12-14 06:45:10.275827] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.534 06:45:10 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:56.792 06:45:10 -- target/nvmf_lvol.sh@42 -- # perf_pid=72732 00:14:56.792 06:45:10 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:56.792 06:45:10 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:57.728 06:45:11 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot a794e49a-74c3-428e-ab6e-0cfcbd9e1cae MY_SNAPSHOT 00:14:57.986 06:45:11 -- target/nvmf_lvol.sh@47 -- # snapshot=0a133f5d-a085-448b-b79d-e7e8285a9e8b 00:14:57.986 06:45:11 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize a794e49a-74c3-428e-ab6e-0cfcbd9e1cae 30 00:14:58.245 06:45:12 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 0a133f5d-a085-448b-b79d-e7e8285a9e8b MY_CLONE 00:14:58.811 06:45:12 -- target/nvmf_lvol.sh@49 -- # clone=d855f73f-0baf-45e5-aaaf-d758f258b964 00:14:58.811 06:45:12 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate d855f73f-0baf-45e5-aaaf-d758f258b964 00:14:59.378 06:45:13 -- target/nvmf_lvol.sh@53 -- # wait 72732 00:15:07.496 Initializing NVMe Controllers 00:15:07.496 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:07.496 Controller IO queue size 128, less than required. 00:15:07.496 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:07.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:07.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:07.496 Initialization complete. Launching workers. 00:15:07.496 ======================================================== 00:15:07.496 Latency(us) 00:15:07.496 Device Information : IOPS MiB/s Average min max 00:15:07.496 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11132.20 43.49 11504.24 2695.80 114010.52 00:15:07.496 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11097.80 43.35 11533.54 3440.63 54119.81 00:15:07.496 ======================================================== 00:15:07.496 Total : 22230.00 86.84 11518.87 2695.80 114010.52 00:15:07.496 00:15:07.496 06:45:20 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:07.496 06:45:21 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a794e49a-74c3-428e-ab6e-0cfcbd9e1cae 00:15:07.496 06:45:21 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7f2d4df5-daaf-462e-9d69-d0f22b029fb6 00:15:07.754 06:45:21 -- target/nvmf_lvol.sh@60 -- # rm -f 00:15:07.754 06:45:21 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:15:07.754 06:45:21 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:15:07.754 06:45:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:07.754 06:45:21 -- nvmf/common.sh@116 -- # sync 00:15:07.754 06:45:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:07.754 06:45:21 -- nvmf/common.sh@119 -- # set +e 00:15:07.754 06:45:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:07.754 06:45:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:07.754 rmmod nvme_tcp 00:15:07.754 rmmod nvme_fabrics 00:15:07.754 rmmod nvme_keyring 00:15:07.754 06:45:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:07.754 06:45:21 -- nvmf/common.sh@123 -- # set -e 00:15:07.754 06:45:21 -- nvmf/common.sh@124 -- # return 0 00:15:07.754 06:45:21 -- nvmf/common.sh@477 -- # '[' -n 72579 ']' 00:15:07.754 06:45:21 -- nvmf/common.sh@478 -- # killprocess 72579 00:15:07.754 06:45:21 -- common/autotest_common.sh@936 -- # '[' -z 72579 ']' 00:15:07.754 06:45:21 -- common/autotest_common.sh@940 -- # kill -0 72579 00:15:07.754 06:45:21 -- common/autotest_common.sh@941 -- # uname 00:15:07.754 06:45:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:07.754 06:45:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 72579 00:15:07.754 06:45:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:07.754 killing process with pid 72579 00:15:07.754 06:45:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:07.754 06:45:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72579' 00:15:07.754 06:45:21 -- common/autotest_common.sh@955 -- # kill 72579 00:15:07.754 06:45:21 -- common/autotest_common.sh@960 -- # wait 72579 00:15:08.322 06:45:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:08.322 06:45:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:08.322 06:45:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:08.322 06:45:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:08.322 06:45:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:08.322 06:45:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.322 06:45:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.322 06:45:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.322 06:45:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:08.322 00:15:08.322 real 0m16.110s 00:15:08.322 user 1m6.675s 00:15:08.322 sys 0m3.888s 00:15:08.322 06:45:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:08.322 06:45:22 -- common/autotest_common.sh@10 -- # set +x 00:15:08.322 ************************************ 00:15:08.322 END TEST nvmf_lvol 00:15:08.322 ************************************ 00:15:08.322 06:45:22 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:08.322 06:45:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:08.322 06:45:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:08.322 06:45:22 -- common/autotest_common.sh@10 -- # set +x 00:15:08.322 ************************************ 00:15:08.322 START TEST nvmf_lvs_grow 00:15:08.322 ************************************ 00:15:08.322 06:45:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:08.322 * Looking for test storage... 
00:15:08.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:08.322 06:45:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:08.322 06:45:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:08.322 06:45:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:08.581 06:45:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:08.581 06:45:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:08.581 06:45:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:08.581 06:45:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:08.581 06:45:22 -- scripts/common.sh@335 -- # IFS=.-: 00:15:08.581 06:45:22 -- scripts/common.sh@335 -- # read -ra ver1 00:15:08.581 06:45:22 -- scripts/common.sh@336 -- # IFS=.-: 00:15:08.581 06:45:22 -- scripts/common.sh@336 -- # read -ra ver2 00:15:08.581 06:45:22 -- scripts/common.sh@337 -- # local 'op=<' 00:15:08.581 06:45:22 -- scripts/common.sh@339 -- # ver1_l=2 00:15:08.581 06:45:22 -- scripts/common.sh@340 -- # ver2_l=1 00:15:08.581 06:45:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:08.581 06:45:22 -- scripts/common.sh@343 -- # case "$op" in 00:15:08.581 06:45:22 -- scripts/common.sh@344 -- # : 1 00:15:08.581 06:45:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:08.581 06:45:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:08.581 06:45:22 -- scripts/common.sh@364 -- # decimal 1 00:15:08.581 06:45:22 -- scripts/common.sh@352 -- # local d=1 00:15:08.581 06:45:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:08.581 06:45:22 -- scripts/common.sh@354 -- # echo 1 00:15:08.581 06:45:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:08.581 06:45:22 -- scripts/common.sh@365 -- # decimal 2 00:15:08.581 06:45:22 -- scripts/common.sh@352 -- # local d=2 00:15:08.581 06:45:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:08.581 06:45:22 -- scripts/common.sh@354 -- # echo 2 00:15:08.581 06:45:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:08.581 06:45:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:08.581 06:45:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:08.581 06:45:22 -- scripts/common.sh@367 -- # return 0 00:15:08.581 06:45:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:08.581 06:45:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:08.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.581 --rc genhtml_branch_coverage=1 00:15:08.581 --rc genhtml_function_coverage=1 00:15:08.581 --rc genhtml_legend=1 00:15:08.581 --rc geninfo_all_blocks=1 00:15:08.581 --rc geninfo_unexecuted_blocks=1 00:15:08.581 00:15:08.581 ' 00:15:08.581 06:45:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:08.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.581 --rc genhtml_branch_coverage=1 00:15:08.581 --rc genhtml_function_coverage=1 00:15:08.581 --rc genhtml_legend=1 00:15:08.581 --rc geninfo_all_blocks=1 00:15:08.581 --rc geninfo_unexecuted_blocks=1 00:15:08.581 00:15:08.581 ' 00:15:08.581 06:45:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:08.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.581 --rc genhtml_branch_coverage=1 00:15:08.581 --rc genhtml_function_coverage=1 00:15:08.581 --rc genhtml_legend=1 00:15:08.581 --rc geninfo_all_blocks=1 00:15:08.581 --rc geninfo_unexecuted_blocks=1 00:15:08.581 00:15:08.581 ' 00:15:08.581 
06:45:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:08.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.581 --rc genhtml_branch_coverage=1 00:15:08.581 --rc genhtml_function_coverage=1 00:15:08.581 --rc genhtml_legend=1 00:15:08.581 --rc geninfo_all_blocks=1 00:15:08.581 --rc geninfo_unexecuted_blocks=1 00:15:08.581 00:15:08.581 ' 00:15:08.581 06:45:22 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:08.581 06:45:22 -- nvmf/common.sh@7 -- # uname -s 00:15:08.581 06:45:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.581 06:45:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.581 06:45:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.581 06:45:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.581 06:45:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.581 06:45:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.581 06:45:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.581 06:45:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.581 06:45:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.581 06:45:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.581 06:45:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:15:08.581 06:45:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:15:08.581 06:45:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.581 06:45:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.581 06:45:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:08.581 06:45:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:08.581 06:45:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.581 06:45:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.581 06:45:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.581 06:45:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.581 06:45:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.581 06:45:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.581 06:45:22 -- paths/export.sh@5 -- # export PATH 00:15:08.581 06:45:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.581 06:45:22 -- nvmf/common.sh@46 -- # : 0 00:15:08.581 06:45:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:08.581 06:45:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:08.581 06:45:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:08.581 06:45:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.581 06:45:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.581 06:45:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:08.581 06:45:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:08.581 06:45:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:08.581 06:45:22 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:08.581 06:45:22 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:08.581 06:45:22 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:15:08.581 06:45:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:08.581 06:45:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:08.581 06:45:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:08.581 06:45:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:08.581 06:45:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:08.581 06:45:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.581 06:45:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.581 06:45:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.581 06:45:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:08.581 06:45:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:08.581 06:45:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:08.581 06:45:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:08.581 06:45:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:08.581 06:45:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:08.582 06:45:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.582 06:45:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:08.582 06:45:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:08.582 06:45:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:08.582 06:45:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:08.582 06:45:22 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:08.582 06:45:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:08.582 06:45:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.582 06:45:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:08.582 06:45:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:08.582 06:45:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:08.582 06:45:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:08.582 06:45:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:08.582 06:45:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:08.582 Cannot find device "nvmf_tgt_br" 00:15:08.582 06:45:22 -- nvmf/common.sh@154 -- # true 00:15:08.582 06:45:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:08.582 Cannot find device "nvmf_tgt_br2" 00:15:08.582 06:45:22 -- nvmf/common.sh@155 -- # true 00:15:08.582 06:45:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:08.582 06:45:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:08.582 Cannot find device "nvmf_tgt_br" 00:15:08.582 06:45:22 -- nvmf/common.sh@157 -- # true 00:15:08.582 06:45:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:08.582 Cannot find device "nvmf_tgt_br2" 00:15:08.582 06:45:22 -- nvmf/common.sh@158 -- # true 00:15:08.582 06:45:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:08.582 06:45:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:08.582 06:45:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:08.582 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:08.582 06:45:22 -- nvmf/common.sh@161 -- # true 00:15:08.582 06:45:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:08.582 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:08.582 06:45:22 -- nvmf/common.sh@162 -- # true 00:15:08.582 06:45:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:08.582 06:45:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:08.582 06:45:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:08.582 06:45:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:08.582 06:45:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:08.841 06:45:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:08.841 06:45:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:08.841 06:45:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:08.841 06:45:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:08.841 06:45:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:08.841 06:45:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:08.841 06:45:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:08.841 06:45:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:08.841 06:45:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:08.841 06:45:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:15:08.841 06:45:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:08.841 06:45:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:08.841 06:45:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:08.841 06:45:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:08.841 06:45:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:08.841 06:45:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:08.841 06:45:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:08.841 06:45:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:08.841 06:45:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:08.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:08.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:15:08.841 00:15:08.841 --- 10.0.0.2 ping statistics --- 00:15:08.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.841 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:15:08.841 06:45:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:08.841 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:08.841 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:15:08.841 00:15:08.841 --- 10.0.0.3 ping statistics --- 00:15:08.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.841 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:08.841 06:45:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:08.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:08.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:08.841 00:15:08.841 --- 10.0.0.1 ping statistics --- 00:15:08.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.841 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:08.841 06:45:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:08.841 06:45:22 -- nvmf/common.sh@421 -- # return 0 00:15:08.841 06:45:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:08.841 06:45:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:08.841 06:45:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:08.841 06:45:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:08.841 06:45:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:08.841 06:45:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:08.841 06:45:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:08.841 06:45:22 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:15:08.841 06:45:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:08.841 06:45:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:08.841 06:45:22 -- common/autotest_common.sh@10 -- # set +x 00:15:08.841 06:45:22 -- nvmf/common.sh@469 -- # nvmfpid=73097 00:15:08.841 06:45:22 -- nvmf/common.sh@470 -- # waitforlisten 73097 00:15:08.841 06:45:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:08.841 06:45:22 -- common/autotest_common.sh@829 -- # '[' -z 73097 ']' 00:15:08.841 06:45:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.841 06:45:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:08.841 06:45:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.841 06:45:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.841 06:45:22 -- common/autotest_common.sh@10 -- # set +x 00:15:08.841 [2024-12-14 06:45:22.793938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:08.841 [2024-12-14 06:45:22.794057] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.100 [2024-12-14 06:45:22.928886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.100 [2024-12-14 06:45:23.040089] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:09.100 [2024-12-14 06:45:23.040261] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.100 [2024-12-14 06:45:23.040275] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.100 [2024-12-14 06:45:23.040283] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:09.100 [2024-12-14 06:45:23.040345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.036 06:45:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.036 06:45:23 -- common/autotest_common.sh@862 -- # return 0 00:15:10.036 06:45:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:10.036 06:45:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:10.036 06:45:23 -- common/autotest_common.sh@10 -- # set +x 00:15:10.036 06:45:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.036 06:45:23 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:10.295 [2024-12-14 06:45:24.087428] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:10.295 06:45:24 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:15:10.295 06:45:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:10.295 06:45:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:10.295 06:45:24 -- common/autotest_common.sh@10 -- # set +x 00:15:10.295 ************************************ 00:15:10.295 START TEST lvs_grow_clean 00:15:10.295 ************************************ 00:15:10.295 06:45:24 -- common/autotest_common.sh@1114 -- # lvs_grow 00:15:10.295 06:45:24 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:10.295 06:45:24 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:10.295 06:45:24 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:10.295 06:45:24 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:10.295 06:45:24 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:10.295 06:45:24 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:10.295 06:45:24 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:10.295 06:45:24 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:10.295 06:45:24 -- target/nvmf_lvs_grow.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:10.554 06:45:24 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:10.554 06:45:24 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:10.813 06:45:24 -- target/nvmf_lvs_grow.sh@28 -- # lvs=1dea0051-5e9a-4338-9c22-3135a791e248 00:15:10.813 06:45:24 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1dea0051-5e9a-4338-9c22-3135a791e248 00:15:10.813 06:45:24 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:11.071 06:45:24 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:11.072 06:45:24 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:11.072 06:45:24 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1dea0051-5e9a-4338-9c22-3135a791e248 lvol 150 00:15:11.330 06:45:25 -- target/nvmf_lvs_grow.sh@33 -- # lvol=b37a8b95-1a2e-4bb6-b0d7-6cb658bc8dcb 00:15:11.330 06:45:25 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:11.330 06:45:25 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:11.588 [2024-12-14 06:45:25.373955] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:11.588 [2024-12-14 06:45:25.374025] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:11.588 true 00:15:11.588 06:45:25 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1dea0051-5e9a-4338-9c22-3135a791e248 00:15:11.588 06:45:25 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:11.847 06:45:25 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:11.847 06:45:25 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:12.105 06:45:25 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b37a8b95-1a2e-4bb6-b0d7-6cb658bc8dcb 00:15:12.105 06:45:26 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:12.363 [2024-12-14 06:45:26.258714] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.363 06:45:26 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:12.622 06:45:26 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73265 00:15:12.622 06:45:26 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:12.622 06:45:26 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:12.622 06:45:26 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73265 /var/tmp/bdevperf.sock 00:15:12.622 06:45:26 -- common/autotest_common.sh@829 -- # '[' -z 73265 ']' 00:15:12.622 
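lvs_grow_clean builds its device stack entirely through rpc.py: a 200 MiB file backs an AIO bdev, an lvstore with 4 MiB clusters sits on top (49 data clusters, as the check in the trace expects), a 150 MiB lvol is carved out of it, and the backing file is then enlarged to 400 MiB and rescanned; growing the lvstore itself is deliberately deferred until I/O is running. Condensed from the traced RPCs ($rpc and $aio are shorthand introduced here, not variables the script defines):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  rm -f "$aio" && truncate -s 200M "$aio"
  $rpc bdev_aio_create "$aio" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
  truncate -s 400M "$aio"          # grow the backing file...
  $rpc bdev_aio_rescan aio_bdev    # ...and let the AIO bdev pick up the new size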
06:45:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:12.622 06:45:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.622 06:45:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:12.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:12.622 06:45:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.622 06:45:26 -- common/autotest_common.sh@10 -- # set +x 00:15:12.622 [2024-12-14 06:45:26.542426] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:12.622 [2024-12-14 06:45:26.542531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73265 ] 00:15:12.880 [2024-12-14 06:45:26.684621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.880 [2024-12-14 06:45:26.798672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.448 06:45:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:13.448 06:45:27 -- common/autotest_common.sh@862 -- # return 0 00:15:13.448 06:45:27 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:14.015 Nvme0n1 00:15:14.015 06:45:27 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:14.015 [ 00:15:14.015 { 00:15:14.015 "aliases": [ 00:15:14.015 "b37a8b95-1a2e-4bb6-b0d7-6cb658bc8dcb" 00:15:14.015 ], 00:15:14.015 "assigned_rate_limits": { 00:15:14.015 "r_mbytes_per_sec": 0, 00:15:14.015 "rw_ios_per_sec": 0, 00:15:14.015 "rw_mbytes_per_sec": 0, 00:15:14.015 "w_mbytes_per_sec": 0 00:15:14.015 }, 00:15:14.015 "block_size": 4096, 00:15:14.015 "claimed": false, 00:15:14.015 "driver_specific": { 00:15:14.015 "mp_policy": "active_passive", 00:15:14.015 "nvme": [ 00:15:14.015 { 00:15:14.015 "ctrlr_data": { 00:15:14.015 "ana_reporting": false, 00:15:14.015 "cntlid": 1, 00:15:14.015 "firmware_revision": "24.01.1", 00:15:14.015 "model_number": "SPDK bdev Controller", 00:15:14.015 "multi_ctrlr": true, 00:15:14.015 "oacs": { 00:15:14.015 "firmware": 0, 00:15:14.015 "format": 0, 00:15:14.015 "ns_manage": 0, 00:15:14.015 "security": 0 00:15:14.015 }, 00:15:14.015 "serial_number": "SPDK0", 00:15:14.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:14.015 "vendor_id": "0x8086" 00:15:14.015 }, 00:15:14.015 "ns_data": { 00:15:14.015 "can_share": true, 00:15:14.015 "id": 1 00:15:14.015 }, 00:15:14.015 "trid": { 00:15:14.015 "adrfam": "IPv4", 00:15:14.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:14.015 "traddr": "10.0.0.2", 00:15:14.015 "trsvcid": "4420", 00:15:14.015 "trtype": "TCP" 00:15:14.015 }, 00:15:14.015 "vs": { 00:15:14.015 "nvme_version": "1.3" 00:15:14.015 } 00:15:14.015 } 00:15:14.015 ] 00:15:14.015 }, 00:15:14.015 "name": "Nvme0n1", 00:15:14.015 "num_blocks": 38912, 00:15:14.015 "product_name": "NVMe disk", 00:15:14.015 "supported_io_types": { 00:15:14.015 "abort": true, 00:15:14.015 "compare": true, 00:15:14.015 "compare_and_write": true, 00:15:14.015 "flush": true, 00:15:14.015 "nvme_admin": true, 00:15:14.015 "nvme_io": true, 00:15:14.015 "read": true, 
00:15:14.015 "reset": true, 00:15:14.015 "unmap": true, 00:15:14.015 "write": true, 00:15:14.015 "write_zeroes": true 00:15:14.015 }, 00:15:14.015 "uuid": "b37a8b95-1a2e-4bb6-b0d7-6cb658bc8dcb", 00:15:14.015 "zoned": false 00:15:14.015 } 00:15:14.015 ] 00:15:14.015 06:45:27 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:14.015 06:45:27 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73308 00:15:14.015 06:45:27 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:14.274 Running I/O for 10 seconds... 00:15:15.209 Latency(us) 00:15:15.209 [2024-12-14T06:45:29.201Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.209 [2024-12-14T06:45:29.201Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:15.209 Nvme0n1 : 1.00 8065.00 31.50 0.00 0.00 0.00 0.00 0.00 00:15:15.209 [2024-12-14T06:45:29.201Z] =================================================================================================================== 00:15:15.209 [2024-12-14T06:45:29.201Z] Total : 8065.00 31.50 0.00 0.00 0.00 0.00 0.00 00:15:15.209 00:15:16.144 06:45:29 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1dea0051-5e9a-4338-9c22-3135a791e248 00:15:16.144 [2024-12-14T06:45:30.136Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:16.144 Nvme0n1 : 2.00 8055.00 31.46 0.00 0.00 0.00 0.00 0.00 00:15:16.144 [2024-12-14T06:45:30.136Z] =================================================================================================================== 00:15:16.144 [2024-12-14T06:45:30.136Z] Total : 8055.00 31.46 0.00 0.00 0.00 0.00 0.00 00:15:16.144 00:15:16.403 true 00:15:16.403 06:45:30 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1dea0051-5e9a-4338-9c22-3135a791e248 00:15:16.403 06:45:30 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:16.661 06:45:30 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:16.661 06:45:30 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:16.661 06:45:30 -- target/nvmf_lvs_grow.sh@65 -- # wait 73308 00:15:17.228 [2024-12-14T06:45:31.220Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:17.228 Nvme0n1 : 3.00 7994.00 31.23 0.00 0.00 0.00 0.00 0.00 00:15:17.228 [2024-12-14T06:45:31.220Z] =================================================================================================================== 00:15:17.228 [2024-12-14T06:45:31.220Z] Total : 7994.00 31.23 0.00 0.00 0.00 0.00 0.00 00:15:17.228 00:15:18.164 [2024-12-14T06:45:32.156Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:18.164 Nvme0n1 : 4.00 8033.75 31.38 0.00 0.00 0.00 0.00 0.00 00:15:18.164 [2024-12-14T06:45:32.156Z] =================================================================================================================== 00:15:18.164 [2024-12-14T06:45:32.156Z] Total : 8033.75 31.38 0.00 0.00 0.00 0.00 0.00 00:15:18.164 00:15:19.099 [2024-12-14T06:45:33.091Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:19.099 Nvme0n1 : 5.00 8019.40 31.33 0.00 0.00 0.00 0.00 0.00 00:15:19.099 [2024-12-14T06:45:33.091Z] =================================================================================================================== 00:15:19.099 [2024-12-14T06:45:33.091Z] Total : 8019.40 
31.33 0.00 0.00 0.00 0.00 0.00 00:15:19.099 00:15:20.474 [2024-12-14T06:45:34.466Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:20.474 Nvme0n1 : 6.00 8024.50 31.35 0.00 0.00 0.00 0.00 0.00 00:15:20.474 [2024-12-14T06:45:34.466Z] =================================================================================================================== 00:15:20.474 [2024-12-14T06:45:34.466Z] Total : 8024.50 31.35 0.00 0.00 0.00 0.00 0.00 00:15:20.474 00:15:21.470 [2024-12-14T06:45:35.462Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:21.470 Nvme0n1 : 7.00 8019.29 31.33 0.00 0.00 0.00 0.00 0.00 00:15:21.470 [2024-12-14T06:45:35.462Z] =================================================================================================================== 00:15:21.470 [2024-12-14T06:45:35.462Z] Total : 8019.29 31.33 0.00 0.00 0.00 0.00 0.00 00:15:21.470 00:15:22.406 [2024-12-14T06:45:36.398Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:22.406 Nvme0n1 : 8.00 8012.75 31.30 0.00 0.00 0.00 0.00 0.00 00:15:22.406 [2024-12-14T06:45:36.398Z] =================================================================================================================== 00:15:22.406 [2024-12-14T06:45:36.398Z] Total : 8012.75 31.30 0.00 0.00 0.00 0.00 0.00 00:15:22.406 00:15:23.342 [2024-12-14T06:45:37.334Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:23.342 Nvme0n1 : 9.00 8003.11 31.26 0.00 0.00 0.00 0.00 0.00 00:15:23.342 [2024-12-14T06:45:37.334Z] =================================================================================================================== 00:15:23.342 [2024-12-14T06:45:37.334Z] Total : 8003.11 31.26 0.00 0.00 0.00 0.00 0.00 00:15:23.342 00:15:24.277 [2024-12-14T06:45:38.269Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:24.277 Nvme0n1 : 10.00 8012.30 31.30 0.00 0.00 0.00 0.00 0.00 00:15:24.277 [2024-12-14T06:45:38.269Z] =================================================================================================================== 00:15:24.277 [2024-12-14T06:45:38.269Z] Total : 8012.30 31.30 0.00 0.00 0.00 0.00 0.00 00:15:24.277 00:15:24.277 00:15:24.277 Latency(us) 00:15:24.277 [2024-12-14T06:45:38.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.277 [2024-12-14T06:45:38.269Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:24.277 Nvme0n1 : 10.01 8020.92 31.33 0.00 0.00 15954.50 4736.47 57195.05 00:15:24.277 [2024-12-14T06:45:38.270Z] =================================================================================================================== 00:15:24.278 [2024-12-14T06:45:38.270Z] Total : 8020.92 31.33 0.00 0.00 15954.50 4736.47 57195.05 00:15:24.278 0 00:15:24.278 06:45:38 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73265 00:15:24.278 06:45:38 -- common/autotest_common.sh@936 -- # '[' -z 73265 ']' 00:15:24.278 06:45:38 -- common/autotest_common.sh@940 -- # kill -0 73265 00:15:24.278 06:45:38 -- common/autotest_common.sh@941 -- # uname 00:15:24.278 06:45:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:24.278 06:45:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73265 00:15:24.278 06:45:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:24.278 06:45:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:24.278 killing process with pid 73265 00:15:24.278 
06:45:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73265' 00:15:24.278 06:45:38 -- common/autotest_common.sh@955 -- # kill 73265 00:15:24.278 Received shutdown signal, test time was about 10.000000 seconds 00:15:24.278 00:15:24.278 Latency(us) 00:15:24.278 [2024-12-14T06:45:38.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.278 [2024-12-14T06:45:38.270Z] =================================================================================================================== 00:15:24.278 [2024-12-14T06:45:38.270Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:24.278 06:45:38 -- common/autotest_common.sh@960 -- # wait 73265 00:15:24.536 06:45:38 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:24.795 06:45:38 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1dea0051-5e9a-4338-9c22-3135a791e248 00:15:24.795 06:45:38 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:25.054 06:45:38 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:25.054 06:45:38 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:15:25.054 06:45:38 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:25.312 [2024-12-14 06:45:39.128631] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:25.312 06:45:39 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1dea0051-5e9a-4338-9c22-3135a791e248 00:15:25.312 06:45:39 -- common/autotest_common.sh@650 -- # local es=0 00:15:25.312 06:45:39 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1dea0051-5e9a-4338-9c22-3135a791e248 00:15:25.312 06:45:39 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:25.312 06:45:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:25.312 06:45:39 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:25.312 06:45:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:25.312 06:45:39 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:25.312 06:45:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:25.312 06:45:39 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:25.312 06:45:39 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:25.312 06:45:39 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1dea0051-5e9a-4338-9c22-3135a791e248 00:15:25.570 2024/12/14 06:45:39 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:1dea0051-5e9a-4338-9c22-3135a791e248], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:15:25.570 request: 00:15:25.570 { 00:15:25.570 "method": "bdev_lvol_get_lvstores", 00:15:25.570 "params": { 00:15:25.570 "uuid": "1dea0051-5e9a-4338-9c22-3135a791e248" 00:15:25.570 } 00:15:25.570 } 00:15:25.570 Got JSON-RPC error response 00:15:25.570 GoRPCClient: error on JSON-RPC call 00:15:25.570 06:45:39 -- common/autotest_common.sh@653 -- # es=1 00:15:25.570 06:45:39 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:25.570 06:45:39 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:25.570 06:45:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:25.570 06:45:39 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:25.829 aio_bdev 00:15:25.829 06:45:39 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev b37a8b95-1a2e-4bb6-b0d7-6cb658bc8dcb 00:15:25.829 06:45:39 -- common/autotest_common.sh@897 -- # local bdev_name=b37a8b95-1a2e-4bb6-b0d7-6cb658bc8dcb 00:15:25.829 06:45:39 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:25.829 06:45:39 -- common/autotest_common.sh@899 -- # local i 00:15:25.829 06:45:39 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:25.829 06:45:39 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:25.829 06:45:39 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:26.087 06:45:39 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b37a8b95-1a2e-4bb6-b0d7-6cb658bc8dcb -t 2000 00:15:26.346 [ 00:15:26.346 { 00:15:26.346 "aliases": [ 00:15:26.346 "lvs/lvol" 00:15:26.346 ], 00:15:26.346 "assigned_rate_limits": { 00:15:26.346 "r_mbytes_per_sec": 0, 00:15:26.346 "rw_ios_per_sec": 0, 00:15:26.346 "rw_mbytes_per_sec": 0, 00:15:26.346 "w_mbytes_per_sec": 0 00:15:26.346 }, 00:15:26.346 "block_size": 4096, 00:15:26.346 "claimed": false, 00:15:26.346 "driver_specific": { 00:15:26.346 "lvol": { 00:15:26.346 "base_bdev": "aio_bdev", 00:15:26.346 "clone": false, 00:15:26.346 "esnap_clone": false, 00:15:26.346 "lvol_store_uuid": "1dea0051-5e9a-4338-9c22-3135a791e248", 00:15:26.346 "snapshot": false, 00:15:26.346 "thin_provision": false 00:15:26.346 } 00:15:26.346 }, 00:15:26.346 "name": "b37a8b95-1a2e-4bb6-b0d7-6cb658bc8dcb", 00:15:26.346 "num_blocks": 38912, 00:15:26.346 "product_name": "Logical Volume", 00:15:26.346 "supported_io_types": { 00:15:26.346 "abort": false, 00:15:26.346 "compare": false, 00:15:26.346 "compare_and_write": false, 00:15:26.346 "flush": false, 00:15:26.346 "nvme_admin": false, 00:15:26.346 "nvme_io": false, 00:15:26.346 "read": true, 00:15:26.346 "reset": true, 00:15:26.346 "unmap": true, 00:15:26.346 "write": true, 00:15:26.346 "write_zeroes": true 00:15:26.346 }, 00:15:26.346 "uuid": "b37a8b95-1a2e-4bb6-b0d7-6cb658bc8dcb", 00:15:26.346 "zoned": false 00:15:26.346 } 00:15:26.346 ] 00:15:26.346 06:45:40 -- common/autotest_common.sh@905 -- # return 0 00:15:26.346 06:45:40 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1dea0051-5e9a-4338-9c22-3135a791e248 00:15:26.346 06:45:40 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:26.604 06:45:40 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:26.604 06:45:40 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1dea0051-5e9a-4338-9c22-3135a791e248 00:15:26.604 06:45:40 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:26.862 06:45:40 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:26.862 06:45:40 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b37a8b95-1a2e-4bb6-b0d7-6cb658bc8dcb 00:15:27.121 06:45:40 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 1dea0051-5e9a-4338-9c22-3135a791e248 00:15:27.379 06:45:41 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:27.379 06:45:41 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:27.947 00:15:27.947 real 0m17.576s 00:15:27.947 user 0m16.961s 00:15:27.947 sys 0m2.025s 00:15:27.947 06:45:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:27.947 ************************************ 00:15:27.947 END TEST lvs_grow_clean 00:15:27.947 ************************************ 00:15:27.947 06:45:41 -- common/autotest_common.sh@10 -- # set +x 00:15:27.947 06:45:41 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:27.947 06:45:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:27.947 06:45:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:27.947 06:45:41 -- common/autotest_common.sh@10 -- # set +x 00:15:27.947 ************************************ 00:15:27.947 START TEST lvs_grow_dirty 00:15:27.947 ************************************ 00:15:27.947 06:45:41 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:15:27.947 06:45:41 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:27.947 06:45:41 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:27.947 06:45:41 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:27.947 06:45:41 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:27.947 06:45:41 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:27.947 06:45:41 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:27.947 06:45:41 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:27.947 06:45:41 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:27.947 06:45:41 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:28.205 06:45:42 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:28.206 06:45:42 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:28.464 06:45:42 -- target/nvmf_lvs_grow.sh@28 -- # lvs=fe2325f0-c9a0-4b33-b8be-86b66bdbe0c4 00:15:28.464 06:45:42 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe2325f0-c9a0-4b33-b8be-86b66bdbe0c4 00:15:28.464 06:45:42 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:28.722 06:45:42 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:28.722 06:45:42 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:28.722 06:45:42 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fe2325f0-c9a0-4b33-b8be-86b66bdbe0c4 lvol 150 00:15:28.981 06:45:42 -- target/nvmf_lvs_grow.sh@33 -- # lvol=da10a100-cebf-47d2-b4f8-e1b4e3237568 00:15:28.981 06:45:42 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:28.981 06:45:42 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:29.240 [2024-12-14 06:45:42.996741] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO 
device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:29.240 [2024-12-14 06:45:42.996823] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:29.240 true 00:15:29.240 06:45:43 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe2325f0-c9a0-4b33-b8be-86b66bdbe0c4 00:15:29.240 06:45:43 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:29.240 06:45:43 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:29.240 06:45:43 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:29.807 06:45:43 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 da10a100-cebf-47d2-b4f8-e1b4e3237568 00:15:29.807 06:45:43 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:30.065 06:45:43 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:30.324 06:45:44 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:30.324 06:45:44 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73691 00:15:30.324 06:45:44 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:30.324 06:45:44 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73691 /var/tmp/bdevperf.sock 00:15:30.324 06:45:44 -- common/autotest_common.sh@829 -- # '[' -z 73691 ']' 00:15:30.324 06:45:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:30.324 06:45:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.324 06:45:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:30.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:30.324 06:45:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.324 06:45:44 -- common/autotest_common.sh@10 -- # set +x 00:15:30.324 [2024-12-14 06:45:44.209159] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:30.324 [2024-12-14 06:45:44.209246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73691 ] 00:15:30.583 [2024-12-14 06:45:44.345382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.583 [2024-12-14 06:45:44.468263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.517 06:45:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.517 06:45:45 -- common/autotest_common.sh@862 -- # return 0 00:15:31.517 06:45:45 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:31.517 Nvme0n1 00:15:31.517 06:45:45 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:31.776 [ 00:15:31.776 { 00:15:31.776 "aliases": [ 00:15:31.776 "da10a100-cebf-47d2-b4f8-e1b4e3237568" 00:15:31.776 ], 00:15:31.776 "assigned_rate_limits": { 00:15:31.776 "r_mbytes_per_sec": 0, 00:15:31.776 "rw_ios_per_sec": 0, 00:15:31.776 "rw_mbytes_per_sec": 0, 00:15:31.776 "w_mbytes_per_sec": 0 00:15:31.776 }, 00:15:31.776 "block_size": 4096, 00:15:31.776 "claimed": false, 00:15:31.776 "driver_specific": { 00:15:31.776 "mp_policy": "active_passive", 00:15:31.776 "nvme": [ 00:15:31.776 { 00:15:31.776 "ctrlr_data": { 00:15:31.776 "ana_reporting": false, 00:15:31.776 "cntlid": 1, 00:15:31.776 "firmware_revision": "24.01.1", 00:15:31.776 "model_number": "SPDK bdev Controller", 00:15:31.776 "multi_ctrlr": true, 00:15:31.776 "oacs": { 00:15:31.776 "firmware": 0, 00:15:31.776 "format": 0, 00:15:31.776 "ns_manage": 0, 00:15:31.776 "security": 0 00:15:31.776 }, 00:15:31.776 "serial_number": "SPDK0", 00:15:31.776 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:31.776 "vendor_id": "0x8086" 00:15:31.776 }, 00:15:31.776 "ns_data": { 00:15:31.776 "can_share": true, 00:15:31.776 "id": 1 00:15:31.776 }, 00:15:31.776 "trid": { 00:15:31.776 "adrfam": "IPv4", 00:15:31.776 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:31.776 "traddr": "10.0.0.2", 00:15:31.776 "trsvcid": "4420", 00:15:31.776 "trtype": "TCP" 00:15:31.776 }, 00:15:31.776 "vs": { 00:15:31.776 "nvme_version": "1.3" 00:15:31.776 } 00:15:31.776 } 00:15:31.776 ] 00:15:31.776 }, 00:15:31.776 "name": "Nvme0n1", 00:15:31.776 "num_blocks": 38912, 00:15:31.776 "product_name": "NVMe disk", 00:15:31.776 "supported_io_types": { 00:15:31.776 "abort": true, 00:15:31.776 "compare": true, 00:15:31.776 "compare_and_write": true, 00:15:31.776 "flush": true, 00:15:31.776 "nvme_admin": true, 00:15:31.776 "nvme_io": true, 00:15:31.776 "read": true, 00:15:31.776 "reset": true, 00:15:31.776 "unmap": true, 00:15:31.776 "write": true, 00:15:31.776 "write_zeroes": true 00:15:31.776 }, 00:15:31.776 "uuid": "da10a100-cebf-47d2-b4f8-e1b4e3237568", 00:15:31.776 "zoned": false 00:15:31.776 } 00:15:31.776 ] 00:15:31.776 06:45:45 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:31.776 06:45:45 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73739 00:15:31.776 06:45:45 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:31.776 Running I/O for 10 seconds... 
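With the lvol in place it is exported over NVMe/TCP and exercised from a separate bdevperf process, and about two seconds into the 10-second 4 KiB random-write run the lvstore is grown underneath the live workload (49 to 99 total data clusters in the check that follows). Continuing the sketch above, the export-and-attach sequence as traced (NQN, serial number and addresses are the ones visible in the log):

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests &
  sleep 2 && $rpc bdev_lvol_grow_lvstore -u "$lvs"    # 49 -> 99 total_data_clusters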
00:15:33.152 Latency(us) 00:15:33.152 [2024-12-14T06:45:47.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.152 [2024-12-14T06:45:47.144Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:33.152 Nvme0n1 : 1.00 8325.00 32.52 0.00 0.00 0.00 0.00 0.00 00:15:33.152 [2024-12-14T06:45:47.144Z] =================================================================================================================== 00:15:33.152 [2024-12-14T06:45:47.144Z] Total : 8325.00 32.52 0.00 0.00 0.00 0.00 0.00 00:15:33.152 00:15:33.719 06:45:47 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fe2325f0-c9a0-4b33-b8be-86b66bdbe0c4 00:15:33.978 [2024-12-14T06:45:47.970Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:33.978 Nvme0n1 : 2.00 8366.50 32.68 0.00 0.00 0.00 0.00 0.00 00:15:33.978 [2024-12-14T06:45:47.970Z] =================================================================================================================== 00:15:33.978 [2024-12-14T06:45:47.970Z] Total : 8366.50 32.68 0.00 0.00 0.00 0.00 0.00 00:15:33.978 00:15:34.236 true 00:15:34.236 06:45:47 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe2325f0-c9a0-4b33-b8be-86b66bdbe0c4 00:15:34.236 06:45:47 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:34.494 06:45:48 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:34.494 06:45:48 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:34.494 06:45:48 -- target/nvmf_lvs_grow.sh@65 -- # wait 73739 00:15:35.061 [2024-12-14T06:45:49.053Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:35.061 Nvme0n1 : 3.00 8254.00 32.24 0.00 0.00 0.00 0.00 0.00 00:15:35.061 [2024-12-14T06:45:49.053Z] =================================================================================================================== 00:15:35.061 [2024-12-14T06:45:49.053Z] Total : 8254.00 32.24 0.00 0.00 0.00 0.00 0.00 00:15:35.061 00:15:35.997 [2024-12-14T06:45:49.989Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:35.997 Nvme0n1 : 4.00 8258.50 32.26 0.00 0.00 0.00 0.00 0.00 00:15:35.997 [2024-12-14T06:45:49.989Z] =================================================================================================================== 00:15:35.997 [2024-12-14T06:45:49.989Z] Total : 8258.50 32.26 0.00 0.00 0.00 0.00 0.00 00:15:35.997 00:15:36.932 [2024-12-14T06:45:50.924Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:36.932 Nvme0n1 : 5.00 8237.20 32.18 0.00 0.00 0.00 0.00 0.00 00:15:36.932 [2024-12-14T06:45:50.924Z] =================================================================================================================== 00:15:36.932 [2024-12-14T06:45:50.924Z] Total : 8237.20 32.18 0.00 0.00 0.00 0.00 0.00 00:15:36.932 00:15:37.869 [2024-12-14T06:45:51.861Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:37.869 Nvme0n1 : 6.00 8172.50 31.92 0.00 0.00 0.00 0.00 0.00 00:15:37.869 [2024-12-14T06:45:51.861Z] =================================================================================================================== 00:15:37.869 [2024-12-14T06:45:51.861Z] Total : 8172.50 31.92 0.00 0.00 0.00 0.00 0.00 00:15:37.869 00:15:38.804 [2024-12-14T06:45:52.796Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:15:38.804 Nvme0n1 : 7.00 8153.71 31.85 0.00 0.00 0.00 0.00 0.00 00:15:38.804 [2024-12-14T06:45:52.796Z] =================================================================================================================== 00:15:38.804 [2024-12-14T06:45:52.796Z] Total : 8153.71 31.85 0.00 0.00 0.00 0.00 0.00 00:15:38.804 00:15:40.181 [2024-12-14T06:45:54.173Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:40.181 Nvme0n1 : 8.00 7896.62 30.85 0.00 0.00 0.00 0.00 0.00 00:15:40.181 [2024-12-14T06:45:54.173Z] =================================================================================================================== 00:15:40.181 [2024-12-14T06:45:54.173Z] Total : 7896.62 30.85 0.00 0.00 0.00 0.00 0.00 00:15:40.181 00:15:41.118 [2024-12-14T06:45:55.110Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:41.118 Nvme0n1 : 9.00 7851.11 30.67 0.00 0.00 0.00 0.00 0.00 00:15:41.118 [2024-12-14T06:45:55.110Z] =================================================================================================================== 00:15:41.118 [2024-12-14T06:45:55.110Z] Total : 7851.11 30.67 0.00 0.00 0.00 0.00 0.00 00:15:41.118 00:15:42.052 [2024-12-14T06:45:56.044Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:42.052 Nvme0n1 : 10.00 7820.10 30.55 0.00 0.00 0.00 0.00 0.00 00:15:42.052 [2024-12-14T06:45:56.045Z] =================================================================================================================== 00:15:42.053 [2024-12-14T06:45:56.045Z] Total : 7820.10 30.55 0.00 0.00 0.00 0.00 0.00 00:15:42.053 00:15:42.053 00:15:42.053 Latency(us) 00:15:42.053 [2024-12-14T06:45:56.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.053 [2024-12-14T06:45:56.045Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:42.053 Nvme0n1 : 10.01 7824.37 30.56 0.00 0.00 16354.79 5481.19 226873.72 00:15:42.053 [2024-12-14T06:45:56.045Z] =================================================================================================================== 00:15:42.053 [2024-12-14T06:45:56.045Z] Total : 7824.37 30.56 0.00 0.00 16354.79 5481.19 226873.72 00:15:42.053 0 00:15:42.053 06:45:55 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73691 00:15:42.053 06:45:55 -- common/autotest_common.sh@936 -- # '[' -z 73691 ']' 00:15:42.053 06:45:55 -- common/autotest_common.sh@940 -- # kill -0 73691 00:15:42.053 06:45:55 -- common/autotest_common.sh@941 -- # uname 00:15:42.053 06:45:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:42.053 06:45:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73691 00:15:42.053 killing process with pid 73691 00:15:42.053 Received shutdown signal, test time was about 10.000000 seconds 00:15:42.053 00:15:42.053 Latency(us) 00:15:42.053 [2024-12-14T06:45:56.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.053 [2024-12-14T06:45:56.045Z] =================================================================================================================== 00:15:42.053 [2024-12-14T06:45:56.045Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:42.053 06:45:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:42.053 06:45:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:42.053 06:45:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73691' 00:15:42.053 06:45:55 -- common/autotest_common.sh@955 
-- # kill 73691 00:15:42.053 06:45:55 -- common/autotest_common.sh@960 -- # wait 73691 00:15:42.312 06:45:56 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:42.570 06:45:56 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe2325f0-c9a0-4b33-b8be-86b66bdbe0c4 00:15:42.570 06:45:56 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:42.829 06:45:56 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:42.829 06:45:56 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:42.829 06:45:56 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 73097 00:15:42.829 06:45:56 -- target/nvmf_lvs_grow.sh@74 -- # wait 73097 00:15:42.829 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 73097 Killed "${NVMF_APP[@]}" "$@" 00:15:42.829 06:45:56 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:42.829 06:45:56 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:42.829 06:45:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:42.829 06:45:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:42.829 06:45:56 -- common/autotest_common.sh@10 -- # set +x 00:15:42.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.829 06:45:56 -- nvmf/common.sh@469 -- # nvmfpid=73889 00:15:42.829 06:45:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:42.829 06:45:56 -- nvmf/common.sh@470 -- # waitforlisten 73889 00:15:42.829 06:45:56 -- common/autotest_common.sh@829 -- # '[' -z 73889 ']' 00:15:42.829 06:45:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.829 06:45:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:42.829 06:45:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.829 06:45:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:42.829 06:45:56 -- common/autotest_common.sh@10 -- # set +x 00:15:42.829 [2024-12-14 06:45:56.720773] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:42.829 [2024-12-14 06:45:56.721110] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.088 [2024-12-14 06:45:56.854676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.088 [2024-12-14 06:45:56.955666] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:43.088 [2024-12-14 06:45:56.956144] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.088 [2024-12-14 06:45:56.956266] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.088 [2024-12-14 06:45:56.956439] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
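The dirty variant only differs in how it ends: the first target (pid 73097) is killed with SIGKILL while the grown lvstore is still open, a fresh nvmf_tgt is started (pid 73889 above), and re-creating the AIO bdev triggers blobstore recovery; the get_lvstores checks further down then confirm that free_clusters (61) and total_data_clusters (99) survived the crash. Schematically, reusing the shorthand from the earlier sketches ($nvmfpid stands for the pid the harness records, 73097 here):

  kill -9 "$nvmfpid" && wait "$nvmfpid" || true   # crash the target with the lvstore dirty
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  $rpc bdev_aio_create "$aio" aio_bdev 4096       # logs "Performing recovery on blobstore"
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # 61
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # 99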
00:15:43.088 [2024-12-14 06:45:56.956490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.024 06:45:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:44.024 06:45:57 -- common/autotest_common.sh@862 -- # return 0 00:15:44.024 06:45:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:44.024 06:45:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:44.024 06:45:57 -- common/autotest_common.sh@10 -- # set +x 00:15:44.024 06:45:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.024 06:45:57 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:44.024 [2024-12-14 06:45:58.004838] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:44.024 [2024-12-14 06:45:58.005524] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:44.024 [2024-12-14 06:45:58.005862] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:44.282 06:45:58 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:44.282 06:45:58 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev da10a100-cebf-47d2-b4f8-e1b4e3237568 00:15:44.283 06:45:58 -- common/autotest_common.sh@897 -- # local bdev_name=da10a100-cebf-47d2-b4f8-e1b4e3237568 00:15:44.283 06:45:58 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:44.283 06:45:58 -- common/autotest_common.sh@899 -- # local i 00:15:44.283 06:45:58 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:44.283 06:45:58 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:44.283 06:45:58 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:44.541 06:45:58 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b da10a100-cebf-47d2-b4f8-e1b4e3237568 -t 2000 00:15:44.800 [ 00:15:44.800 { 00:15:44.800 "aliases": [ 00:15:44.800 "lvs/lvol" 00:15:44.800 ], 00:15:44.800 "assigned_rate_limits": { 00:15:44.800 "r_mbytes_per_sec": 0, 00:15:44.800 "rw_ios_per_sec": 0, 00:15:44.800 "rw_mbytes_per_sec": 0, 00:15:44.800 "w_mbytes_per_sec": 0 00:15:44.800 }, 00:15:44.800 "block_size": 4096, 00:15:44.800 "claimed": false, 00:15:44.800 "driver_specific": { 00:15:44.800 "lvol": { 00:15:44.800 "base_bdev": "aio_bdev", 00:15:44.800 "clone": false, 00:15:44.800 "esnap_clone": false, 00:15:44.800 "lvol_store_uuid": "fe2325f0-c9a0-4b33-b8be-86b66bdbe0c4", 00:15:44.800 "snapshot": false, 00:15:44.800 "thin_provision": false 00:15:44.800 } 00:15:44.800 }, 00:15:44.800 "name": "da10a100-cebf-47d2-b4f8-e1b4e3237568", 00:15:44.800 "num_blocks": 38912, 00:15:44.800 "product_name": "Logical Volume", 00:15:44.800 "supported_io_types": { 00:15:44.800 "abort": false, 00:15:44.800 "compare": false, 00:15:44.800 "compare_and_write": false, 00:15:44.800 "flush": false, 00:15:44.800 "nvme_admin": false, 00:15:44.800 "nvme_io": false, 00:15:44.800 "read": true, 00:15:44.800 "reset": true, 00:15:44.800 "unmap": true, 00:15:44.800 "write": true, 00:15:44.800 "write_zeroes": true 00:15:44.800 }, 00:15:44.800 "uuid": "da10a100-cebf-47d2-b4f8-e1b4e3237568", 00:15:44.800 "zoned": false 00:15:44.800 } 00:15:44.800 ] 00:15:44.800 06:45:58 -- common/autotest_common.sh@905 -- # return 0 00:15:44.800 06:45:58 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
fe2325f0-c9a0-4b33-b8be-86b66bdbe0c4 00:15:44.800 06:45:58 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:45.058 06:45:58 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:45.058 06:45:58 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe2325f0-c9a0-4b33-b8be-86b66bdbe0c4 00:15:45.058 06:45:58 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:45.316 06:45:59 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:45.316 06:45:59 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:45.574 [2024-12-14 06:45:59.362193] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:45.574 06:45:59 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe2325f0-c9a0-4b33-b8be-86b66bdbe0c4 00:15:45.574 06:45:59 -- common/autotest_common.sh@650 -- # local es=0 00:15:45.574 06:45:59 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe2325f0-c9a0-4b33-b8be-86b66bdbe0c4 00:15:45.574 06:45:59 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.574 06:45:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:45.574 06:45:59 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.574 06:45:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:45.575 06:45:59 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.575 06:45:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:45.575 06:45:59 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.575 06:45:59 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:45.575 06:45:59 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe2325f0-c9a0-4b33-b8be-86b66bdbe0c4 00:15:45.833 2024/12/14 06:45:59 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:fe2325f0-c9a0-4b33-b8be-86b66bdbe0c4], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:15:45.833 request: 00:15:45.833 { 00:15:45.833 "method": "bdev_lvol_get_lvstores", 00:15:45.833 "params": { 00:15:45.833 "uuid": "fe2325f0-c9a0-4b33-b8be-86b66bdbe0c4" 00:15:45.833 } 00:15:45.833 } 00:15:45.833 Got JSON-RPC error response 00:15:45.833 GoRPCClient: error on JSON-RPC call 00:15:45.833 06:45:59 -- common/autotest_common.sh@653 -- # es=1 00:15:45.833 06:45:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:45.833 06:45:59 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:45.833 06:45:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:45.833 06:45:59 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:46.091 aio_bdev 00:15:46.091 06:45:59 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev da10a100-cebf-47d2-b4f8-e1b4e3237568 00:15:46.091 06:45:59 -- common/autotest_common.sh@897 -- # local bdev_name=da10a100-cebf-47d2-b4f8-e1b4e3237568 00:15:46.091 06:45:59 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:46.091 
06:45:59 -- common/autotest_common.sh@899 -- # local i 00:15:46.091 06:45:59 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:46.091 06:45:59 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:46.091 06:45:59 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:46.350 06:46:00 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b da10a100-cebf-47d2-b4f8-e1b4e3237568 -t 2000 00:15:46.613 [ 00:15:46.613 { 00:15:46.613 "aliases": [ 00:15:46.613 "lvs/lvol" 00:15:46.613 ], 00:15:46.613 "assigned_rate_limits": { 00:15:46.613 "r_mbytes_per_sec": 0, 00:15:46.613 "rw_ios_per_sec": 0, 00:15:46.613 "rw_mbytes_per_sec": 0, 00:15:46.613 "w_mbytes_per_sec": 0 00:15:46.613 }, 00:15:46.613 "block_size": 4096, 00:15:46.613 "claimed": false, 00:15:46.613 "driver_specific": { 00:15:46.613 "lvol": { 00:15:46.613 "base_bdev": "aio_bdev", 00:15:46.613 "clone": false, 00:15:46.613 "esnap_clone": false, 00:15:46.613 "lvol_store_uuid": "fe2325f0-c9a0-4b33-b8be-86b66bdbe0c4", 00:15:46.613 "snapshot": false, 00:15:46.613 "thin_provision": false 00:15:46.613 } 00:15:46.613 }, 00:15:46.613 "name": "da10a100-cebf-47d2-b4f8-e1b4e3237568", 00:15:46.613 "num_blocks": 38912, 00:15:46.613 "product_name": "Logical Volume", 00:15:46.613 "supported_io_types": { 00:15:46.613 "abort": false, 00:15:46.613 "compare": false, 00:15:46.613 "compare_and_write": false, 00:15:46.613 "flush": false, 00:15:46.613 "nvme_admin": false, 00:15:46.613 "nvme_io": false, 00:15:46.613 "read": true, 00:15:46.613 "reset": true, 00:15:46.613 "unmap": true, 00:15:46.613 "write": true, 00:15:46.613 "write_zeroes": true 00:15:46.613 }, 00:15:46.613 "uuid": "da10a100-cebf-47d2-b4f8-e1b4e3237568", 00:15:46.613 "zoned": false 00:15:46.613 } 00:15:46.613 ] 00:15:46.613 06:46:00 -- common/autotest_common.sh@905 -- # return 0 00:15:46.613 06:46:00 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe2325f0-c9a0-4b33-b8be-86b66bdbe0c4 00:15:46.613 06:46:00 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:46.881 06:46:00 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:46.881 06:46:00 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe2325f0-c9a0-4b33-b8be-86b66bdbe0c4 00:15:46.881 06:46:00 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:47.139 06:46:01 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:47.139 06:46:01 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete da10a100-cebf-47d2-b4f8-e1b4e3237568 00:15:47.397 06:46:01 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fe2325f0-c9a0-4b33-b8be-86b66bdbe0c4 00:15:47.656 06:46:01 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:47.914 06:46:01 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:48.172 ************************************ 00:15:48.172 END TEST lvs_grow_dirty 00:15:48.172 ************************************ 00:15:48.172 00:15:48.172 real 0m20.410s 00:15:48.172 user 0m39.864s 00:15:48.172 sys 0m9.058s 00:15:48.172 06:46:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:48.172 06:46:02 -- common/autotest_common.sh@10 -- # set +x 00:15:48.430 06:46:02 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:48.430 06:46:02 -- common/autotest_common.sh@806 -- # type=--id 00:15:48.430 06:46:02 -- common/autotest_common.sh@807 -- # id=0 00:15:48.430 06:46:02 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:48.430 06:46:02 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:48.430 06:46:02 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:48.430 06:46:02 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:48.430 06:46:02 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:48.430 06:46:02 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:48.430 nvmf_trace.0 00:15:48.430 06:46:02 -- common/autotest_common.sh@821 -- # return 0 00:15:48.430 06:46:02 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:48.430 06:46:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:48.430 06:46:02 -- nvmf/common.sh@116 -- # sync 00:15:48.998 06:46:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:48.998 06:46:02 -- nvmf/common.sh@119 -- # set +e 00:15:48.998 06:46:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:48.998 06:46:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:48.998 rmmod nvme_tcp 00:15:48.998 rmmod nvme_fabrics 00:15:48.998 rmmod nvme_keyring 00:15:48.998 06:46:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:48.998 06:46:02 -- nvmf/common.sh@123 -- # set -e 00:15:48.998 06:46:02 -- nvmf/common.sh@124 -- # return 0 00:15:48.998 06:46:02 -- nvmf/common.sh@477 -- # '[' -n 73889 ']' 00:15:48.998 06:46:02 -- nvmf/common.sh@478 -- # killprocess 73889 00:15:48.998 06:46:02 -- common/autotest_common.sh@936 -- # '[' -z 73889 ']' 00:15:48.998 06:46:02 -- common/autotest_common.sh@940 -- # kill -0 73889 00:15:48.998 06:46:02 -- common/autotest_common.sh@941 -- # uname 00:15:48.998 06:46:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:48.998 06:46:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73889 00:15:48.998 06:46:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:48.998 06:46:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:48.998 06:46:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73889' 00:15:48.998 killing process with pid 73889 00:15:48.998 06:46:02 -- common/autotest_common.sh@955 -- # kill 73889 00:15:48.998 06:46:02 -- common/autotest_common.sh@960 -- # wait 73889 00:15:49.256 06:46:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:49.256 06:46:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:49.256 06:46:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:49.256 06:46:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:49.256 06:46:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:49.256 06:46:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.256 06:46:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.256 06:46:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.256 06:46:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:49.256 00:15:49.256 real 0m41.028s 00:15:49.256 user 1m3.936s 00:15:49.256 sys 0m12.220s 00:15:49.256 06:46:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:49.256 ************************************ 00:15:49.256 END TEST nvmf_lvs_grow 00:15:49.256 06:46:03 -- 
common/autotest_common.sh@10 -- # set +x 00:15:49.256 ************************************ 00:15:49.516 06:46:03 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:49.516 06:46:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:49.516 06:46:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:49.516 06:46:03 -- common/autotest_common.sh@10 -- # set +x 00:15:49.516 ************************************ 00:15:49.516 START TEST nvmf_bdev_io_wait 00:15:49.516 ************************************ 00:15:49.516 06:46:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:49.516 * Looking for test storage... 00:15:49.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:49.516 06:46:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:49.516 06:46:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:49.516 06:46:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:49.516 06:46:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:49.516 06:46:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:49.516 06:46:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:49.516 06:46:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:49.516 06:46:03 -- scripts/common.sh@335 -- # IFS=.-: 00:15:49.516 06:46:03 -- scripts/common.sh@335 -- # read -ra ver1 00:15:49.516 06:46:03 -- scripts/common.sh@336 -- # IFS=.-: 00:15:49.516 06:46:03 -- scripts/common.sh@336 -- # read -ra ver2 00:15:49.516 06:46:03 -- scripts/common.sh@337 -- # local 'op=<' 00:15:49.516 06:46:03 -- scripts/common.sh@339 -- # ver1_l=2 00:15:49.516 06:46:03 -- scripts/common.sh@340 -- # ver2_l=1 00:15:49.516 06:46:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:49.516 06:46:03 -- scripts/common.sh@343 -- # case "$op" in 00:15:49.516 06:46:03 -- scripts/common.sh@344 -- # : 1 00:15:49.516 06:46:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:49.516 06:46:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:49.516 06:46:03 -- scripts/common.sh@364 -- # decimal 1 00:15:49.516 06:46:03 -- scripts/common.sh@352 -- # local d=1 00:15:49.516 06:46:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:49.516 06:46:03 -- scripts/common.sh@354 -- # echo 1 00:15:49.516 06:46:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:49.516 06:46:03 -- scripts/common.sh@365 -- # decimal 2 00:15:49.516 06:46:03 -- scripts/common.sh@352 -- # local d=2 00:15:49.516 06:46:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:49.516 06:46:03 -- scripts/common.sh@354 -- # echo 2 00:15:49.516 06:46:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:49.516 06:46:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:49.516 06:46:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:49.516 06:46:03 -- scripts/common.sh@367 -- # return 0 00:15:49.516 06:46:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:49.516 06:46:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:49.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.516 --rc genhtml_branch_coverage=1 00:15:49.516 --rc genhtml_function_coverage=1 00:15:49.516 --rc genhtml_legend=1 00:15:49.516 --rc geninfo_all_blocks=1 00:15:49.516 --rc geninfo_unexecuted_blocks=1 00:15:49.516 00:15:49.516 ' 00:15:49.516 06:46:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:49.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.516 --rc genhtml_branch_coverage=1 00:15:49.516 --rc genhtml_function_coverage=1 00:15:49.516 --rc genhtml_legend=1 00:15:49.516 --rc geninfo_all_blocks=1 00:15:49.516 --rc geninfo_unexecuted_blocks=1 00:15:49.516 00:15:49.516 ' 00:15:49.516 06:46:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:49.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.516 --rc genhtml_branch_coverage=1 00:15:49.516 --rc genhtml_function_coverage=1 00:15:49.516 --rc genhtml_legend=1 00:15:49.516 --rc geninfo_all_blocks=1 00:15:49.516 --rc geninfo_unexecuted_blocks=1 00:15:49.516 00:15:49.516 ' 00:15:49.516 06:46:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:49.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.516 --rc genhtml_branch_coverage=1 00:15:49.516 --rc genhtml_function_coverage=1 00:15:49.516 --rc genhtml_legend=1 00:15:49.516 --rc geninfo_all_blocks=1 00:15:49.516 --rc geninfo_unexecuted_blocks=1 00:15:49.516 00:15:49.516 ' 00:15:49.516 06:46:03 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:49.516 06:46:03 -- nvmf/common.sh@7 -- # uname -s 00:15:49.516 06:46:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.516 06:46:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.516 06:46:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.516 06:46:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.516 06:46:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.516 06:46:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.516 06:46:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.516 06:46:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.516 06:46:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.516 06:46:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.516 06:46:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 
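For readers following the xtrace: the scripts/common.sh calls above (lt, cmp_versions, decimal) compare the installed lcov version against 2 so the right --rc option names get picked. A hedged, condensed sketch of that comparison, not the verbatim common.sh code:
version_lt() {                       # returns 0 if dotted version $1 < $2
    local IFS='.-:' i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0    # first lower field decides
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                                           # equal is not "less than"
}
version_lt "$(lcov --version | awk '{print $NF}')" 2 &&
    echo "lcov 1.x: keep the lcov_branch_coverage/lcov_function_coverage rc names"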
00:15:49.516 06:46:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:15:49.516 06:46:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.516 06:46:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.516 06:46:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:49.516 06:46:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:49.516 06:46:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.516 06:46:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.516 06:46:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.516 06:46:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.516 06:46:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.516 06:46:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.516 06:46:03 -- paths/export.sh@5 -- # export PATH 00:15:49.516 06:46:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.516 06:46:03 -- nvmf/common.sh@46 -- # : 0 00:15:49.516 06:46:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:49.516 06:46:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:49.516 06:46:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:49.516 06:46:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.516 06:46:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.516 06:46:03 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:15:49.516 06:46:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:49.516 06:46:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:49.516 06:46:03 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:49.516 06:46:03 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:49.516 06:46:03 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:49.516 06:46:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:49.516 06:46:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.516 06:46:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:49.516 06:46:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:49.516 06:46:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:49.516 06:46:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.516 06:46:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.516 06:46:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.517 06:46:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:49.517 06:46:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:49.517 06:46:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:49.517 06:46:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:49.517 06:46:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:49.517 06:46:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:49.517 06:46:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.517 06:46:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:49.517 06:46:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:49.517 06:46:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:49.517 06:46:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:49.517 06:46:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:49.517 06:46:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:49.517 06:46:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.517 06:46:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:49.517 06:46:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:49.517 06:46:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:49.517 06:46:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:49.517 06:46:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:49.517 06:46:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:49.517 Cannot find device "nvmf_tgt_br" 00:15:49.517 06:46:03 -- nvmf/common.sh@154 -- # true 00:15:49.517 06:46:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:49.517 Cannot find device "nvmf_tgt_br2" 00:15:49.517 06:46:03 -- nvmf/common.sh@155 -- # true 00:15:49.517 06:46:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:49.775 06:46:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:49.775 Cannot find device "nvmf_tgt_br" 00:15:49.775 06:46:03 -- nvmf/common.sh@157 -- # true 00:15:49.775 06:46:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:49.775 Cannot find device "nvmf_tgt_br2" 00:15:49.775 06:46:03 -- nvmf/common.sh@158 -- # true 00:15:49.775 06:46:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:49.775 06:46:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:49.775 06:46:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:49.775 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.775 06:46:03 -- nvmf/common.sh@161 -- # true 00:15:49.775 06:46:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:49.775 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.775 06:46:03 -- nvmf/common.sh@162 -- # true 00:15:49.775 06:46:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:49.776 06:46:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:49.776 06:46:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:49.776 06:46:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:49.776 06:46:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:49.776 06:46:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:49.776 06:46:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:49.776 06:46:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:49.776 06:46:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:49.776 06:46:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:49.776 06:46:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:49.776 06:46:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:49.776 06:46:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:49.776 06:46:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:49.776 06:46:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:49.776 06:46:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:49.776 06:46:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:49.776 06:46:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:49.776 06:46:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:49.776 06:46:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:49.776 06:46:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:49.776 06:46:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:49.776 06:46:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:49.776 06:46:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:49.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:49.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:15:49.776 00:15:49.776 --- 10.0.0.2 ping statistics --- 00:15:49.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.776 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:15:49.776 06:46:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:49.776 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:49.776 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:15:49.776 00:15:49.776 --- 10.0.0.3 ping statistics --- 00:15:49.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.776 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:49.776 06:46:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:49.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:49.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:15:49.776 00:15:49.776 --- 10.0.0.1 ping statistics --- 00:15:49.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.776 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:49.776 06:46:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.776 06:46:03 -- nvmf/common.sh@421 -- # return 0 00:15:49.776 06:46:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:49.776 06:46:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.776 06:46:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:49.776 06:46:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:49.776 06:46:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.776 06:46:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:49.776 06:46:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:50.034 06:46:03 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:50.035 06:46:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:50.035 06:46:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:50.035 06:46:03 -- common/autotest_common.sh@10 -- # set +x 00:15:50.035 06:46:03 -- nvmf/common.sh@469 -- # nvmfpid=74317 00:15:50.035 06:46:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:50.035 06:46:03 -- nvmf/common.sh@470 -- # waitforlisten 74317 00:15:50.035 06:46:03 -- common/autotest_common.sh@829 -- # '[' -z 74317 ']' 00:15:50.035 06:46:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.035 06:46:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.035 06:46:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.035 06:46:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.035 06:46:03 -- common/autotest_common.sh@10 -- # set +x 00:15:50.035 [2024-12-14 06:46:03.854762] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:50.035 [2024-12-14 06:46:03.854865] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.035 [2024-12-14 06:46:03.991491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:50.294 [2024-12-14 06:46:04.093510] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:50.294 [2024-12-14 06:46:04.093653] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.294 [2024-12-14 06:46:04.093666] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.294 [2024-12-14 06:46:04.093674] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
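For orientation between the ping checks above and the reactor start-up below: nvmftestinit has just built the private veth/netns test network and launched nvmf_tgt inside the target namespace. A condensed sketch of that setup, using only names and addresses that appear in the trace (link "up" commands and the second target interface, nvmf_tgt_if2 at 10.0.0.3, omitted for brevity):
ip netns add nvmf_tgt_ns_spdk                                          # target network namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br              # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br                # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                               # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if # target address
ip link add nvmf_br type bridge && ip link set nvmf_br up              # bridge joining both pairs
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT      # allow NVMe/TCP traffic
ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc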
00:15:50.294 [2024-12-14 06:46:04.093876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.294 [2024-12-14 06:46:04.094532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.294 [2024-12-14 06:46:04.094836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.294 [2024-12-14 06:46:04.095578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:51.234 06:46:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.234 06:46:04 -- common/autotest_common.sh@862 -- # return 0 00:15:51.234 06:46:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:51.234 06:46:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:51.234 06:46:04 -- common/autotest_common.sh@10 -- # set +x 00:15:51.234 06:46:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.234 06:46:04 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:51.234 06:46:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.234 06:46:04 -- common/autotest_common.sh@10 -- # set +x 00:15:51.234 06:46:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.234 06:46:04 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:51.234 06:46:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.234 06:46:04 -- common/autotest_common.sh@10 -- # set +x 00:15:51.234 06:46:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.234 06:46:05 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:51.234 06:46:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.234 06:46:05 -- common/autotest_common.sh@10 -- # set +x 00:15:51.234 [2024-12-14 06:46:05.020175] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.234 06:46:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.234 06:46:05 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:51.234 06:46:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.234 06:46:05 -- common/autotest_common.sh@10 -- # set +x 00:15:51.234 Malloc0 00:15:51.234 06:46:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.234 06:46:05 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:51.234 06:46:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.234 06:46:05 -- common/autotest_common.sh@10 -- # set +x 00:15:51.234 06:46:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.234 06:46:05 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:51.234 06:46:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.234 06:46:05 -- common/autotest_common.sh@10 -- # set +x 00:15:51.234 06:46:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.234 06:46:05 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:51.234 06:46:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.234 06:46:05 -- common/autotest_common.sh@10 -- # set +x 00:15:51.234 [2024-12-14 06:46:05.081792] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.234 06:46:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.234 06:46:05 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=74376 00:15:51.234 06:46:05 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:51.234 06:46:05 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:51.234 06:46:05 -- target/bdev_io_wait.sh@30 -- # READ_PID=74378 00:15:51.234 06:46:05 -- nvmf/common.sh@520 -- # config=() 00:15:51.234 06:46:05 -- nvmf/common.sh@520 -- # local subsystem config 00:15:51.234 06:46:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:51.234 06:46:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:51.234 { 00:15:51.234 "params": { 00:15:51.234 "name": "Nvme$subsystem", 00:15:51.234 "trtype": "$TEST_TRANSPORT", 00:15:51.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:51.234 "adrfam": "ipv4", 00:15:51.234 "trsvcid": "$NVMF_PORT", 00:15:51.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:51.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:51.234 "hdgst": ${hdgst:-false}, 00:15:51.234 "ddgst": ${ddgst:-false} 00:15:51.234 }, 00:15:51.234 "method": "bdev_nvme_attach_controller" 00:15:51.234 } 00:15:51.234 EOF 00:15:51.234 )") 00:15:51.234 06:46:05 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:51.234 06:46:05 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:51.234 06:46:05 -- nvmf/common.sh@520 -- # config=() 00:15:51.234 06:46:05 -- nvmf/common.sh@520 -- # local subsystem config 00:15:51.234 06:46:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:51.234 06:46:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:51.234 { 00:15:51.234 "params": { 00:15:51.234 "name": "Nvme$subsystem", 00:15:51.234 "trtype": "$TEST_TRANSPORT", 00:15:51.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:51.235 "adrfam": "ipv4", 00:15:51.235 "trsvcid": "$NVMF_PORT", 00:15:51.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:51.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:51.235 "hdgst": ${hdgst:-false}, 00:15:51.235 "ddgst": ${ddgst:-false} 00:15:51.235 }, 00:15:51.235 "method": "bdev_nvme_attach_controller" 00:15:51.235 } 00:15:51.235 EOF 00:15:51.235 )") 00:15:51.235 06:46:05 -- nvmf/common.sh@542 -- # cat 00:15:51.235 06:46:05 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:51.235 06:46:05 -- nvmf/common.sh@542 -- # cat 00:15:51.235 06:46:05 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:51.235 06:46:05 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=74380 00:15:51.235 06:46:05 -- nvmf/common.sh@520 -- # config=() 00:15:51.235 06:46:05 -- nvmf/common.sh@520 -- # local subsystem config 00:15:51.235 06:46:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:51.235 06:46:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:51.235 { 00:15:51.235 "params": { 00:15:51.235 "name": "Nvme$subsystem", 00:15:51.235 "trtype": "$TEST_TRANSPORT", 00:15:51.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:51.235 "adrfam": "ipv4", 00:15:51.235 "trsvcid": "$NVMF_PORT", 00:15:51.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:51.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:51.235 "hdgst": ${hdgst:-false}, 00:15:51.235 "ddgst": ${ddgst:-false} 00:15:51.235 }, 00:15:51.235 "method": "bdev_nvme_attach_controller" 00:15:51.235 } 00:15:51.235 EOF 00:15:51.235 )") 00:15:51.235 06:46:05 -- target/bdev_io_wait.sh@34 -- # 
UNMAP_PID=74386 00:15:51.235 06:46:05 -- target/bdev_io_wait.sh@35 -- # sync 00:15:51.235 06:46:05 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:51.235 06:46:05 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:51.235 06:46:05 -- nvmf/common.sh@520 -- # config=() 00:15:51.235 06:46:05 -- nvmf/common.sh@542 -- # cat 00:15:51.235 06:46:05 -- nvmf/common.sh@520 -- # local subsystem config 00:15:51.235 06:46:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:51.235 06:46:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:51.235 { 00:15:51.235 "params": { 00:15:51.235 "name": "Nvme$subsystem", 00:15:51.235 "trtype": "$TEST_TRANSPORT", 00:15:51.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:51.235 "adrfam": "ipv4", 00:15:51.235 "trsvcid": "$NVMF_PORT", 00:15:51.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:51.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:51.235 "hdgst": ${hdgst:-false}, 00:15:51.235 "ddgst": ${ddgst:-false} 00:15:51.235 }, 00:15:51.235 "method": "bdev_nvme_attach_controller" 00:15:51.235 } 00:15:51.235 EOF 00:15:51.235 )") 00:15:51.235 06:46:05 -- nvmf/common.sh@544 -- # jq . 00:15:51.235 06:46:05 -- nvmf/common.sh@544 -- # jq . 00:15:51.235 06:46:05 -- nvmf/common.sh@542 -- # cat 00:15:51.235 06:46:05 -- nvmf/common.sh@545 -- # IFS=, 00:15:51.235 06:46:05 -- nvmf/common.sh@544 -- # jq . 00:15:51.235 06:46:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:51.235 "params": { 00:15:51.235 "name": "Nvme1", 00:15:51.235 "trtype": "tcp", 00:15:51.235 "traddr": "10.0.0.2", 00:15:51.235 "adrfam": "ipv4", 00:15:51.235 "trsvcid": "4420", 00:15:51.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:51.235 "hdgst": false, 00:15:51.235 "ddgst": false 00:15:51.235 }, 00:15:51.235 "method": "bdev_nvme_attach_controller" 00:15:51.235 }' 00:15:51.235 06:46:05 -- nvmf/common.sh@545 -- # IFS=, 00:15:51.235 06:46:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:51.235 "params": { 00:15:51.235 "name": "Nvme1", 00:15:51.235 "trtype": "tcp", 00:15:51.235 "traddr": "10.0.0.2", 00:15:51.235 "adrfam": "ipv4", 00:15:51.235 "trsvcid": "4420", 00:15:51.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:51.235 "hdgst": false, 00:15:51.235 "ddgst": false 00:15:51.235 }, 00:15:51.235 "method": "bdev_nvme_attach_controller" 00:15:51.235 }' 00:15:51.235 06:46:05 -- nvmf/common.sh@545 -- # IFS=, 00:15:51.235 06:46:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:51.235 "params": { 00:15:51.235 "name": "Nvme1", 00:15:51.235 "trtype": "tcp", 00:15:51.235 "traddr": "10.0.0.2", 00:15:51.235 "adrfam": "ipv4", 00:15:51.235 "trsvcid": "4420", 00:15:51.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:51.235 "hdgst": false, 00:15:51.235 "ddgst": false 00:15:51.235 }, 00:15:51.236 "method": "bdev_nvme_attach_controller" 00:15:51.236 }' 00:15:51.236 06:46:05 -- nvmf/common.sh@544 -- # jq . 
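Four bdevperf instances are being launched here in parallel, one per I/O type, each pinned to its own core and fed the connection JSON shown in the heredocs above through a process substitution (the /dev/fd/63 in the command lines). Condensed from the invocations in the trace, with the long build path shortened:
BDEVPERF=build/examples/bdevperf
$BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &  # WRITE_PID=74376
$BDEVPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &  # READ_PID=74378
$BDEVPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &  # FLUSH_PID=74380
$BDEVPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &  # UNMAP_PID=74386
wait 74376 74378 74380 74386   # the script waits on each pid in turn (bdev_io_wait.sh@37-40)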
00:15:51.236 06:46:05 -- nvmf/common.sh@545 -- # IFS=, 00:15:51.236 06:46:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:51.236 "params": { 00:15:51.236 "name": "Nvme1", 00:15:51.236 "trtype": "tcp", 00:15:51.236 "traddr": "10.0.0.2", 00:15:51.236 "adrfam": "ipv4", 00:15:51.236 "trsvcid": "4420", 00:15:51.236 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.236 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:51.236 "hdgst": false, 00:15:51.236 "ddgst": false 00:15:51.236 }, 00:15:51.236 "method": "bdev_nvme_attach_controller" 00:15:51.236 }' 00:15:51.236 [2024-12-14 06:46:05.149631] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:51.236 [2024-12-14 06:46:05.149716] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:51.236 06:46:05 -- target/bdev_io_wait.sh@37 -- # wait 74376 00:15:51.236 [2024-12-14 06:46:05.166635] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:51.236 [2024-12-14 06:46:05.166730] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:51.236 [2024-12-14 06:46:05.170608] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:51.236 [2024-12-14 06:46:05.170667] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:51.236 [2024-12-14 06:46:05.182881] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:51.236 [2024-12-14 06:46:05.183001] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:51.495 [2024-12-14 06:46:05.383215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.495 [2024-12-14 06:46:05.455596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.754 [2024-12-14 06:46:05.499443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:51.754 [2024-12-14 06:46:05.524377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.754 [2024-12-14 06:46:05.573534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:51.754 [2024-12-14 06:46:05.629042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.754 [2024-12-14 06:46:05.638972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:51.754 Running I/O for 1 seconds... 00:15:51.754 [2024-12-14 06:46:05.726036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:52.013 Running I/O for 1 seconds... 00:15:52.013 Running I/O for 1 seconds... 00:15:52.013 Running I/O for 1 seconds... 
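The per-job tables that follow report runtime, IOPS, throughput, error counts and latency (the "Latency(us)" header means Average/min/max are in microseconds). Throughput should line up with IOPS times the 4096-byte I/O size; a quick sanity check against the flush row below:
awk 'BEGIN { printf "%.2f MiB/s\n", 209923.17 * 4096 / 1048576 }'   # ≈ 820.01 MiB/s, matching the MiB/s column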
00:15:52.949 00:15:52.949 Latency(us) 00:15:52.949 [2024-12-14T06:46:06.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.949 [2024-12-14T06:46:06.941Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:52.949 Nvme1n1 : 1.01 10728.24 41.91 0.00 0.00 11878.07 7983.48 19303.33 00:15:52.949 [2024-12-14T06:46:06.941Z] =================================================================================================================== 00:15:52.949 [2024-12-14T06:46:06.941Z] Total : 10728.24 41.91 0.00 0.00 11878.07 7983.48 19303.33 00:15:52.949 00:15:52.950 Latency(us) 00:15:52.950 [2024-12-14T06:46:06.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.950 [2024-12-14T06:46:06.942Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:52.950 Nvme1n1 : 1.00 209923.17 820.01 0.00 0.00 607.20 296.03 737.28 00:15:52.950 [2024-12-14T06:46:06.942Z] =================================================================================================================== 00:15:52.950 [2024-12-14T06:46:06.942Z] Total : 209923.17 820.01 0.00 0.00 607.20 296.03 737.28 00:15:52.950 00:15:52.950 Latency(us) 00:15:52.950 [2024-12-14T06:46:06.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.950 [2024-12-14T06:46:06.942Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:52.950 Nvme1n1 : 1.01 8852.25 34.58 0.00 0.00 14397.92 7477.06 24069.59 00:15:52.950 [2024-12-14T06:46:06.942Z] =================================================================================================================== 00:15:52.950 [2024-12-14T06:46:06.942Z] Total : 8852.25 34.58 0.00 0.00 14397.92 7477.06 24069.59 00:15:52.950 00:15:52.950 Latency(us) 00:15:52.950 [2024-12-14T06:46:06.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.950 [2024-12-14T06:46:06.942Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:52.950 Nvme1n1 : 1.01 9059.15 35.39 0.00 0.00 14076.66 6613.18 24903.68 00:15:52.950 [2024-12-14T06:46:06.942Z] =================================================================================================================== 00:15:52.950 [2024-12-14T06:46:06.942Z] Total : 9059.15 35.39 0.00 0.00 14076.66 6613.18 24903.68 00:15:53.518 06:46:07 -- target/bdev_io_wait.sh@38 -- # wait 74378 00:15:53.518 06:46:07 -- target/bdev_io_wait.sh@39 -- # wait 74380 00:15:53.518 06:46:07 -- target/bdev_io_wait.sh@40 -- # wait 74386 00:15:53.518 06:46:07 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:53.518 06:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.518 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:15:53.518 06:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.518 06:46:07 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:53.518 06:46:07 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:53.518 06:46:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:53.518 06:46:07 -- nvmf/common.sh@116 -- # sync 00:15:53.518 06:46:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:53.518 06:46:07 -- nvmf/common.sh@119 -- # set +e 00:15:53.518 06:46:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:53.518 06:46:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:53.518 rmmod nvme_tcp 00:15:53.518 rmmod nvme_fabrics 00:15:53.518 rmmod nvme_keyring 00:15:53.518 06:46:07 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:53.518 06:46:07 -- nvmf/common.sh@123 -- # set -e 00:15:53.518 06:46:07 -- nvmf/common.sh@124 -- # return 0 00:15:53.518 06:46:07 -- nvmf/common.sh@477 -- # '[' -n 74317 ']' 00:15:53.518 06:46:07 -- nvmf/common.sh@478 -- # killprocess 74317 00:15:53.518 06:46:07 -- common/autotest_common.sh@936 -- # '[' -z 74317 ']' 00:15:53.518 06:46:07 -- common/autotest_common.sh@940 -- # kill -0 74317 00:15:53.518 06:46:07 -- common/autotest_common.sh@941 -- # uname 00:15:53.518 06:46:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:53.518 06:46:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74317 00:15:53.518 06:46:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:53.518 06:46:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:53.518 killing process with pid 74317 00:15:53.518 06:46:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74317' 00:15:53.518 06:46:07 -- common/autotest_common.sh@955 -- # kill 74317 00:15:53.518 06:46:07 -- common/autotest_common.sh@960 -- # wait 74317 00:15:53.777 06:46:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:53.777 06:46:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:53.777 06:46:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:53.777 06:46:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:53.777 06:46:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:53.777 06:46:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.777 06:46:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.777 06:46:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.777 06:46:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:53.777 00:15:53.777 real 0m4.484s 00:15:53.777 user 0m19.513s 00:15:53.777 sys 0m2.401s 00:15:53.777 06:46:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:53.777 ************************************ 00:15:53.777 END TEST nvmf_bdev_io_wait 00:15:53.777 ************************************ 00:15:53.777 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:15:54.037 06:46:07 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:54.037 06:46:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:54.037 06:46:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:54.037 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:15:54.037 ************************************ 00:15:54.037 START TEST nvmf_queue_depth 00:15:54.037 ************************************ 00:15:54.037 06:46:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:54.037 * Looking for test storage... 
00:15:54.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:54.037 06:46:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:54.037 06:46:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:54.037 06:46:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:54.037 06:46:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:54.037 06:46:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:54.037 06:46:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:54.037 06:46:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:54.037 06:46:07 -- scripts/common.sh@335 -- # IFS=.-: 00:15:54.037 06:46:07 -- scripts/common.sh@335 -- # read -ra ver1 00:15:54.037 06:46:07 -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.037 06:46:07 -- scripts/common.sh@336 -- # read -ra ver2 00:15:54.037 06:46:07 -- scripts/common.sh@337 -- # local 'op=<' 00:15:54.037 06:46:07 -- scripts/common.sh@339 -- # ver1_l=2 00:15:54.037 06:46:07 -- scripts/common.sh@340 -- # ver2_l=1 00:15:54.037 06:46:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:54.037 06:46:07 -- scripts/common.sh@343 -- # case "$op" in 00:15:54.037 06:46:07 -- scripts/common.sh@344 -- # : 1 00:15:54.037 06:46:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:54.037 06:46:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:54.037 06:46:07 -- scripts/common.sh@364 -- # decimal 1 00:15:54.037 06:46:07 -- scripts/common.sh@352 -- # local d=1 00:15:54.037 06:46:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.037 06:46:07 -- scripts/common.sh@354 -- # echo 1 00:15:54.037 06:46:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:54.037 06:46:07 -- scripts/common.sh@365 -- # decimal 2 00:15:54.037 06:46:07 -- scripts/common.sh@352 -- # local d=2 00:15:54.037 06:46:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.037 06:46:07 -- scripts/common.sh@354 -- # echo 2 00:15:54.037 06:46:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:54.037 06:46:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:54.037 06:46:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:54.037 06:46:07 -- scripts/common.sh@367 -- # return 0 00:15:54.037 06:46:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.037 06:46:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:54.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.037 --rc genhtml_branch_coverage=1 00:15:54.037 --rc genhtml_function_coverage=1 00:15:54.037 --rc genhtml_legend=1 00:15:54.037 --rc geninfo_all_blocks=1 00:15:54.037 --rc geninfo_unexecuted_blocks=1 00:15:54.037 00:15:54.037 ' 00:15:54.037 06:46:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:54.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.037 --rc genhtml_branch_coverage=1 00:15:54.037 --rc genhtml_function_coverage=1 00:15:54.037 --rc genhtml_legend=1 00:15:54.037 --rc geninfo_all_blocks=1 00:15:54.037 --rc geninfo_unexecuted_blocks=1 00:15:54.037 00:15:54.037 ' 00:15:54.037 06:46:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:54.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.037 --rc genhtml_branch_coverage=1 00:15:54.037 --rc genhtml_function_coverage=1 00:15:54.037 --rc genhtml_legend=1 00:15:54.037 --rc geninfo_all_blocks=1 00:15:54.037 --rc geninfo_unexecuted_blocks=1 00:15:54.037 00:15:54.037 ' 00:15:54.037 
06:46:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:54.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.037 --rc genhtml_branch_coverage=1 00:15:54.037 --rc genhtml_function_coverage=1 00:15:54.037 --rc genhtml_legend=1 00:15:54.037 --rc geninfo_all_blocks=1 00:15:54.037 --rc geninfo_unexecuted_blocks=1 00:15:54.037 00:15:54.037 ' 00:15:54.037 06:46:07 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:54.037 06:46:07 -- nvmf/common.sh@7 -- # uname -s 00:15:54.037 06:46:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.037 06:46:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.037 06:46:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.037 06:46:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.037 06:46:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.037 06:46:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.037 06:46:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.037 06:46:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.037 06:46:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.037 06:46:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.037 06:46:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:15:54.037 06:46:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:15:54.037 06:46:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.037 06:46:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.037 06:46:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:54.037 06:46:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:54.037 06:46:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.037 06:46:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.037 06:46:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.038 06:46:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.038 06:46:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.038 06:46:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.038 06:46:08 -- paths/export.sh@5 -- # export PATH 00:15:54.038 06:46:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.038 06:46:08 -- nvmf/common.sh@46 -- # : 0 00:15:54.038 06:46:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:54.038 06:46:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:54.038 06:46:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:54.038 06:46:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.038 06:46:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.038 06:46:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:54.038 06:46:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:54.038 06:46:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:54.038 06:46:08 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:54.038 06:46:08 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:54.038 06:46:08 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:54.038 06:46:08 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:54.038 06:46:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:54.038 06:46:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.038 06:46:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:54.038 06:46:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:54.038 06:46:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:54.038 06:46:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.038 06:46:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:54.038 06:46:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.038 06:46:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:54.038 06:46:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:54.038 06:46:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:54.038 06:46:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:54.038 06:46:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:54.038 06:46:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:54.038 06:46:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.038 06:46:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:54.038 06:46:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:54.038 06:46:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:54.038 06:46:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:54.038 06:46:08 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:54.038 06:46:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:54.038 06:46:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.038 06:46:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:54.038 06:46:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:54.038 06:46:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:54.038 06:46:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:54.038 06:46:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:54.297 06:46:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:54.297 Cannot find device "nvmf_tgt_br" 00:15:54.297 06:46:08 -- nvmf/common.sh@154 -- # true 00:15:54.297 06:46:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:54.297 Cannot find device "nvmf_tgt_br2" 00:15:54.297 06:46:08 -- nvmf/common.sh@155 -- # true 00:15:54.297 06:46:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:54.297 06:46:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:54.297 Cannot find device "nvmf_tgt_br" 00:15:54.297 06:46:08 -- nvmf/common.sh@157 -- # true 00:15:54.297 06:46:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:54.297 Cannot find device "nvmf_tgt_br2" 00:15:54.297 06:46:08 -- nvmf/common.sh@158 -- # true 00:15:54.297 06:46:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:54.297 06:46:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:54.297 06:46:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:54.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.297 06:46:08 -- nvmf/common.sh@161 -- # true 00:15:54.297 06:46:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:54.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.297 06:46:08 -- nvmf/common.sh@162 -- # true 00:15:54.297 06:46:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:54.297 06:46:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:54.297 06:46:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:54.297 06:46:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:54.297 06:46:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:54.297 06:46:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:54.297 06:46:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:54.297 06:46:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:54.297 06:46:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:54.297 06:46:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:54.297 06:46:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:54.297 06:46:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:54.297 06:46:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:54.297 06:46:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:54.297 06:46:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:15:54.297 06:46:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:54.297 06:46:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:54.297 06:46:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:54.556 06:46:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:54.556 06:46:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:54.556 06:46:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:54.556 06:46:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:54.556 06:46:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:54.556 06:46:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:54.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:54.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:15:54.556 00:15:54.556 --- 10.0.0.2 ping statistics --- 00:15:54.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.556 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:54.556 06:46:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:54.556 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:54.556 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:15:54.556 00:15:54.556 --- 10.0.0.3 ping statistics --- 00:15:54.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.556 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:54.556 06:46:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:54.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:54.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:54.556 00:15:54.556 --- 10.0.0.1 ping statistics --- 00:15:54.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.556 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:54.556 06:46:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.556 06:46:08 -- nvmf/common.sh@421 -- # return 0 00:15:54.556 06:46:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:54.556 06:46:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.556 06:46:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:54.556 06:46:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:54.556 06:46:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.556 06:46:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:54.556 06:46:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:54.556 06:46:08 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:54.556 06:46:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:54.556 06:46:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:54.556 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:15:54.556 06:46:08 -- nvmf/common.sh@469 -- # nvmfpid=74625 00:15:54.556 06:46:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:54.556 06:46:08 -- nvmf/common.sh@470 -- # waitforlisten 74625 00:15:54.556 06:46:08 -- common/autotest_common.sh@829 -- # '[' -z 74625 ']' 00:15:54.556 06:46:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.556 06:46:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:54.556 06:46:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.556 06:46:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:54.556 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:15:54.556 [2024-12-14 06:46:08.431104] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:54.556 [2024-12-14 06:46:08.431196] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.815 [2024-12-14 06:46:08.573512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.815 [2024-12-14 06:46:08.695324] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:54.815 [2024-12-14 06:46:08.695505] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.815 [2024-12-14 06:46:08.695520] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.815 [2024-12-14 06:46:08.695532] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:54.815 [2024-12-14 06:46:08.695563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.751 06:46:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.751 06:46:09 -- common/autotest_common.sh@862 -- # return 0 00:15:55.751 06:46:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:55.751 06:46:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:55.751 06:46:09 -- common/autotest_common.sh@10 -- # set +x 00:15:55.751 06:46:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.751 06:46:09 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:55.751 06:46:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.751 06:46:09 -- common/autotest_common.sh@10 -- # set +x 00:15:55.751 [2024-12-14 06:46:09.509519] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.751 06:46:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.751 06:46:09 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:55.751 06:46:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.751 06:46:09 -- common/autotest_common.sh@10 -- # set +x 00:15:55.751 Malloc0 00:15:55.751 06:46:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.751 06:46:09 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:55.751 06:46:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.751 06:46:09 -- common/autotest_common.sh@10 -- # set +x 00:15:55.751 06:46:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.751 06:46:09 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:55.751 06:46:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.751 06:46:09 -- common/autotest_common.sh@10 -- # set +x 00:15:55.751 06:46:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.751 06:46:09 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:55.751 06:46:09 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.751 06:46:09 -- common/autotest_common.sh@10 -- # set +x 00:15:55.751 [2024-12-14 06:46:09.577190] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:55.751 06:46:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.751 06:46:09 -- target/queue_depth.sh@30 -- # bdevperf_pid=74676 00:15:55.751 06:46:09 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:55.751 06:46:09 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:55.751 06:46:09 -- target/queue_depth.sh@33 -- # waitforlisten 74676 /var/tmp/bdevperf.sock 00:15:55.751 06:46:09 -- common/autotest_common.sh@829 -- # '[' -z 74676 ']' 00:15:55.751 06:46:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:55.751 06:46:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:55.751 06:46:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:55.751 06:46:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.751 06:46:09 -- common/autotest_common.sh@10 -- # set +x 00:15:55.751 [2024-12-14 06:46:09.630854] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:55.751 [2024-12-14 06:46:09.630972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74676 ] 00:15:56.010 [2024-12-14 06:46:09.765878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.010 [2024-12-14 06:46:09.891699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.945 06:46:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.945 06:46:10 -- common/autotest_common.sh@862 -- # return 0 00:15:56.945 06:46:10 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:56.945 06:46:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.945 06:46:10 -- common/autotest_common.sh@10 -- # set +x 00:15:56.945 NVMe0n1 00:15:56.945 06:46:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.945 06:46:10 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:56.945 Running I/O for 10 seconds... 
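For reference, the nvmf_queue_depth setup traced above reduces to a short RPC sequence against the running nvmf_tgt plus a bdevperf initiator. The sketch below only collects the commands already visible in the trace into one place; the repo path, socket path, NQN, IP and the -q 1024 / -o 4096 values are the ones used in this particular run, not general defaults.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target side: TCP transport, a 64 MiB malloc bdev (512 B blocks), one subsystem listening on 10.0.0.2:4420
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: bdevperf in RPC-wait mode (-z), queue depth 1024, 4 KiB verify workload for 10 s
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # once /var/tmp/bdevperf.sock is up, attach the remote namespace and kick off the run
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests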
00:16:06.917 00:16:06.917 Latency(us) 00:16:06.917 [2024-12-14T06:46:20.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.917 [2024-12-14T06:46:20.909Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:06.917 Verification LBA range: start 0x0 length 0x4000 00:16:06.917 NVMe0n1 : 10.05 16505.56 64.47 0.00 0.00 61839.94 12094.37 50283.99 00:16:06.917 [2024-12-14T06:46:20.909Z] =================================================================================================================== 00:16:06.917 [2024-12-14T06:46:20.909Z] Total : 16505.56 64.47 0.00 0.00 61839.94 12094.37 50283.99 00:16:06.917 0 00:16:06.917 06:46:20 -- target/queue_depth.sh@39 -- # killprocess 74676 00:16:06.917 06:46:20 -- common/autotest_common.sh@936 -- # '[' -z 74676 ']' 00:16:06.917 06:46:20 -- common/autotest_common.sh@940 -- # kill -0 74676 00:16:06.917 06:46:20 -- common/autotest_common.sh@941 -- # uname 00:16:06.917 06:46:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:06.917 06:46:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74676 00:16:06.917 06:46:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:06.917 06:46:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:06.917 killing process with pid 74676 00:16:06.917 06:46:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74676' 00:16:06.917 Received shutdown signal, test time was about 10.000000 seconds 00:16:06.917 00:16:06.917 Latency(us) 00:16:06.917 [2024-12-14T06:46:20.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.917 [2024-12-14T06:46:20.909Z] =================================================================================================================== 00:16:06.917 [2024-12-14T06:46:20.909Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:06.917 06:46:20 -- common/autotest_common.sh@955 -- # kill 74676 00:16:06.917 06:46:20 -- common/autotest_common.sh@960 -- # wait 74676 00:16:07.175 06:46:21 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:07.175 06:46:21 -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:07.175 06:46:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:07.175 06:46:21 -- nvmf/common.sh@116 -- # sync 00:16:07.433 06:46:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:07.433 06:46:21 -- nvmf/common.sh@119 -- # set +e 00:16:07.433 06:46:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:07.433 06:46:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:07.433 rmmod nvme_tcp 00:16:07.433 rmmod nvme_fabrics 00:16:07.433 rmmod nvme_keyring 00:16:07.433 06:46:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:07.433 06:46:21 -- nvmf/common.sh@123 -- # set -e 00:16:07.433 06:46:21 -- nvmf/common.sh@124 -- # return 0 00:16:07.433 06:46:21 -- nvmf/common.sh@477 -- # '[' -n 74625 ']' 00:16:07.433 06:46:21 -- nvmf/common.sh@478 -- # killprocess 74625 00:16:07.433 06:46:21 -- common/autotest_common.sh@936 -- # '[' -z 74625 ']' 00:16:07.433 06:46:21 -- common/autotest_common.sh@940 -- # kill -0 74625 00:16:07.433 06:46:21 -- common/autotest_common.sh@941 -- # uname 00:16:07.433 06:46:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:07.433 06:46:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74625 00:16:07.433 06:46:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:07.433 06:46:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo 
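As a quick sanity check on the summary table above: bdevperf was started with -o 4096, so the MiB/s column is just IOPS times the 4 KiB I/O size. Plain awk arithmetic, nothing SPDK-specific:
  awk 'BEGIN { print 16505.56 * 4096 / (1024 * 1024) }'   # ~64.47 MiB/s, matching the reported throughput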
']' 00:16:07.433 killing process with pid 74625 00:16:07.433 06:46:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74625' 00:16:07.433 06:46:21 -- common/autotest_common.sh@955 -- # kill 74625 00:16:07.433 06:46:21 -- common/autotest_common.sh@960 -- # wait 74625 00:16:07.691 06:46:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:07.691 06:46:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:07.691 06:46:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:07.691 06:46:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:07.691 06:46:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:07.691 06:46:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.691 06:46:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.691 06:46:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.949 06:46:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:07.949 00:16:07.949 real 0m13.883s 00:16:07.949 user 0m23.362s 00:16:07.949 sys 0m2.321s 00:16:07.949 06:46:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:07.949 06:46:21 -- common/autotest_common.sh@10 -- # set +x 00:16:07.949 ************************************ 00:16:07.949 END TEST nvmf_queue_depth 00:16:07.949 ************************************ 00:16:07.949 06:46:21 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:07.949 06:46:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:07.949 06:46:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:07.949 06:46:21 -- common/autotest_common.sh@10 -- # set +x 00:16:07.949 ************************************ 00:16:07.949 START TEST nvmf_multipath 00:16:07.949 ************************************ 00:16:07.950 06:46:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:07.950 * Looking for test storage... 00:16:07.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:07.950 06:46:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:07.950 06:46:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:07.950 06:46:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:07.950 06:46:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:07.950 06:46:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:07.950 06:46:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:07.950 06:46:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:07.950 06:46:21 -- scripts/common.sh@335 -- # IFS=.-: 00:16:07.950 06:46:21 -- scripts/common.sh@335 -- # read -ra ver1 00:16:07.950 06:46:21 -- scripts/common.sh@336 -- # IFS=.-: 00:16:07.950 06:46:21 -- scripts/common.sh@336 -- # read -ra ver2 00:16:07.950 06:46:21 -- scripts/common.sh@337 -- # local 'op=<' 00:16:07.950 06:46:21 -- scripts/common.sh@339 -- # ver1_l=2 00:16:07.950 06:46:21 -- scripts/common.sh@340 -- # ver2_l=1 00:16:07.950 06:46:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:07.950 06:46:21 -- scripts/common.sh@343 -- # case "$op" in 00:16:07.950 06:46:21 -- scripts/common.sh@344 -- # : 1 00:16:07.950 06:46:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:07.950 06:46:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:07.950 06:46:21 -- scripts/common.sh@364 -- # decimal 1 00:16:07.950 06:46:21 -- scripts/common.sh@352 -- # local d=1 00:16:07.950 06:46:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:07.950 06:46:21 -- scripts/common.sh@354 -- # echo 1 00:16:07.950 06:46:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:07.950 06:46:21 -- scripts/common.sh@365 -- # decimal 2 00:16:07.950 06:46:21 -- scripts/common.sh@352 -- # local d=2 00:16:07.950 06:46:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:07.950 06:46:21 -- scripts/common.sh@354 -- # echo 2 00:16:07.950 06:46:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:07.950 06:46:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:07.950 06:46:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:07.950 06:46:21 -- scripts/common.sh@367 -- # return 0 00:16:07.950 06:46:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:07.950 06:46:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:07.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.950 --rc genhtml_branch_coverage=1 00:16:07.950 --rc genhtml_function_coverage=1 00:16:07.950 --rc genhtml_legend=1 00:16:07.950 --rc geninfo_all_blocks=1 00:16:07.950 --rc geninfo_unexecuted_blocks=1 00:16:07.950 00:16:07.950 ' 00:16:07.950 06:46:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:07.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.950 --rc genhtml_branch_coverage=1 00:16:07.950 --rc genhtml_function_coverage=1 00:16:07.950 --rc genhtml_legend=1 00:16:07.950 --rc geninfo_all_blocks=1 00:16:07.950 --rc geninfo_unexecuted_blocks=1 00:16:07.950 00:16:07.950 ' 00:16:07.950 06:46:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:07.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.950 --rc genhtml_branch_coverage=1 00:16:07.950 --rc genhtml_function_coverage=1 00:16:07.950 --rc genhtml_legend=1 00:16:07.950 --rc geninfo_all_blocks=1 00:16:07.950 --rc geninfo_unexecuted_blocks=1 00:16:07.950 00:16:07.950 ' 00:16:07.950 06:46:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:07.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.950 --rc genhtml_branch_coverage=1 00:16:07.950 --rc genhtml_function_coverage=1 00:16:07.950 --rc genhtml_legend=1 00:16:07.950 --rc geninfo_all_blocks=1 00:16:07.950 --rc geninfo_unexecuted_blocks=1 00:16:07.950 00:16:07.950 ' 00:16:07.950 06:46:21 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:07.950 06:46:21 -- nvmf/common.sh@7 -- # uname -s 00:16:07.950 06:46:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.950 06:46:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.950 06:46:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.950 06:46:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.950 06:46:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.950 06:46:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.950 06:46:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.950 06:46:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.950 06:46:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.950 06:46:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.950 06:46:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:16:07.950 
06:46:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:16:07.950 06:46:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.950 06:46:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.950 06:46:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:07.950 06:46:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:07.950 06:46:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.950 06:46:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.950 06:46:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.950 06:46:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.950 06:46:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.950 06:46:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.950 06:46:21 -- paths/export.sh@5 -- # export PATH 00:16:07.950 06:46:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.950 06:46:21 -- nvmf/common.sh@46 -- # : 0 00:16:07.950 06:46:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:07.950 06:46:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:07.950 06:46:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:07.950 06:46:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.950 06:46:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.950 06:46:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:16:07.950 06:46:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:07.950 06:46:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:08.209 06:46:21 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:08.209 06:46:21 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:08.209 06:46:21 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:08.209 06:46:21 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:08.209 06:46:21 -- target/multipath.sh@43 -- # nvmftestinit 00:16:08.209 06:46:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:08.209 06:46:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.209 06:46:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:08.209 06:46:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:08.209 06:46:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:08.209 06:46:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.209 06:46:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.209 06:46:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.209 06:46:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:08.209 06:46:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:08.209 06:46:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:08.209 06:46:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:08.209 06:46:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:08.209 06:46:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:08.209 06:46:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:08.209 06:46:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:08.209 06:46:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:08.209 06:46:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:08.209 06:46:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:08.209 06:46:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:08.209 06:46:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:08.209 06:46:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:08.209 06:46:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:08.209 06:46:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:08.209 06:46:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:08.209 06:46:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:08.209 06:46:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:08.209 06:46:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:08.209 Cannot find device "nvmf_tgt_br" 00:16:08.209 06:46:21 -- nvmf/common.sh@154 -- # true 00:16:08.209 06:46:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:08.209 Cannot find device "nvmf_tgt_br2" 00:16:08.209 06:46:21 -- nvmf/common.sh@155 -- # true 00:16:08.209 06:46:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:08.209 06:46:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:08.209 Cannot find device "nvmf_tgt_br" 00:16:08.209 06:46:22 -- nvmf/common.sh@157 -- # true 00:16:08.209 06:46:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:08.209 Cannot find device "nvmf_tgt_br2" 00:16:08.209 06:46:22 -- nvmf/common.sh@158 -- # true 00:16:08.209 06:46:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:08.209 06:46:22 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:08.209 06:46:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:08.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.209 06:46:22 -- nvmf/common.sh@161 -- # true 00:16:08.209 06:46:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:08.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.209 06:46:22 -- nvmf/common.sh@162 -- # true 00:16:08.209 06:46:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:08.209 06:46:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:08.209 06:46:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:08.209 06:46:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:08.209 06:46:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:08.209 06:46:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:08.209 06:46:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:08.209 06:46:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:08.209 06:46:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:08.209 06:46:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:08.209 06:46:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:08.209 06:46:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:08.209 06:46:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:08.209 06:46:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:08.209 06:46:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:08.209 06:46:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:08.467 06:46:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:08.467 06:46:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:08.467 06:46:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:08.467 06:46:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:08.467 06:46:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:08.467 06:46:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:08.467 06:46:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:08.467 06:46:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:08.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:08.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:16:08.467 00:16:08.467 --- 10.0.0.2 ping statistics --- 00:16:08.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.468 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:16:08.468 06:46:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:08.468 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:08.468 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:16:08.468 00:16:08.468 --- 10.0.0.3 ping statistics --- 00:16:08.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.468 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:08.468 06:46:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:08.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:08.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:08.468 00:16:08.468 --- 10.0.0.1 ping statistics --- 00:16:08.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.468 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:08.468 06:46:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:08.468 06:46:22 -- nvmf/common.sh@421 -- # return 0 00:16:08.468 06:46:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:08.468 06:46:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:08.468 06:46:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:08.468 06:46:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:08.468 06:46:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:08.468 06:46:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:08.468 06:46:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:08.468 06:46:22 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:16:08.468 06:46:22 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:16:08.468 06:46:22 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:16:08.468 06:46:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:08.468 06:46:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:08.468 06:46:22 -- common/autotest_common.sh@10 -- # set +x 00:16:08.468 06:46:22 -- nvmf/common.sh@469 -- # nvmfpid=75015 00:16:08.468 06:46:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:08.468 06:46:22 -- nvmf/common.sh@470 -- # waitforlisten 75015 00:16:08.468 06:46:22 -- common/autotest_common.sh@829 -- # '[' -z 75015 ']' 00:16:08.468 06:46:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.468 06:46:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.468 06:46:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.468 06:46:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.468 06:46:22 -- common/autotest_common.sh@10 -- # set +x 00:16:08.468 [2024-12-14 06:46:22.351126] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:08.468 [2024-12-14 06:46:22.351236] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.726 [2024-12-14 06:46:22.492418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:08.726 [2024-12-14 06:46:22.581418] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:08.726 [2024-12-14 06:46:22.581558] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:08.726 [2024-12-14 06:46:22.581571] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.726 [2024-12-14 06:46:22.581579] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.726 [2024-12-14 06:46:22.581721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.726 [2024-12-14 06:46:22.582175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.726 [2024-12-14 06:46:22.582324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:08.726 [2024-12-14 06:46:22.582327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.661 06:46:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.661 06:46:23 -- common/autotest_common.sh@862 -- # return 0 00:16:09.661 06:46:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:09.661 06:46:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:09.661 06:46:23 -- common/autotest_common.sh@10 -- # set +x 00:16:09.661 06:46:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.661 06:46:23 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:09.661 [2024-12-14 06:46:23.561622] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.661 06:46:23 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:09.919 Malloc0 00:16:10.178 06:46:23 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:16:10.178 06:46:24 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:10.744 06:46:24 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:10.744 [2024-12-14 06:46:24.646074] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:10.744 06:46:24 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:11.002 [2024-12-14 06:46:24.882300] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:11.002 06:46:24 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:16:11.264 06:46:25 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:16:11.523 06:46:25 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:16:11.523 06:46:25 -- common/autotest_common.sh@1187 -- # local i=0 00:16:11.523 06:46:25 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:11.523 06:46:25 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:16:11.523 06:46:25 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:13.425 06:46:27 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
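The multipath variant differs from the queue_depth setup mainly in that the subsystem is created with ANA reporting and gets two TCP listeners, and the kernel initiator connects to both portals. The sketch below condenses the commands traced above; the host NQN/ID are the values nvme gen-hostnqn produced for this run, and -r is the ANA-reporting flag multipath.sh passes to nvmf_create_subsystem.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # two portals, one per target veth interface, so the host sees two paths to the same namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # kernel initiator: connect to both portals with the same host NQN/ID (values from this run)
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
  # the per-path ANA state is then polled at /sys/block/nvme0c0n1/ana_state and /sys/block/nvme0c1n1/ana_state,
  # which is exactly what the check_ana_state calls in the following trace do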
00:16:13.425 06:46:27 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:13.425 06:46:27 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:13.425 06:46:27 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:16:13.425 06:46:27 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:13.425 06:46:27 -- common/autotest_common.sh@1197 -- # return 0 00:16:13.425 06:46:27 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:16:13.425 06:46:27 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:16:13.425 06:46:27 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:16:13.425 06:46:27 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:16:13.425 06:46:27 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:16:13.425 06:46:27 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:16:13.425 06:46:27 -- target/multipath.sh@38 -- # return 0 00:16:13.425 06:46:27 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:16:13.425 06:46:27 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:16:13.425 06:46:27 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:16:13.425 06:46:27 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:16:13.425 06:46:27 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:16:13.425 06:46:27 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:16:13.425 06:46:27 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:16:13.425 06:46:27 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:16:13.425 06:46:27 -- target/multipath.sh@22 -- # local timeout=20 00:16:13.425 06:46:27 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:13.425 06:46:27 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:13.426 06:46:27 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:13.426 06:46:27 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:16:13.426 06:46:27 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:16:13.426 06:46:27 -- target/multipath.sh@22 -- # local timeout=20 00:16:13.426 06:46:27 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:13.426 06:46:27 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:16:13.426 06:46:27 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:13.426 06:46:27 -- target/multipath.sh@85 -- # echo numa 00:16:13.426 06:46:27 -- target/multipath.sh@88 -- # fio_pid=75158 00:16:13.426 06:46:27 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:16:13.426 06:46:27 -- target/multipath.sh@90 -- # sleep 1 00:16:13.426 [global] 00:16:13.426 thread=1 00:16:13.426 invalidate=1 00:16:13.426 rw=randrw 00:16:13.426 time_based=1 00:16:13.426 runtime=6 00:16:13.426 ioengine=libaio 00:16:13.426 direct=1 00:16:13.426 bs=4096 00:16:13.426 iodepth=128 00:16:13.426 norandommap=0 00:16:13.426 numjobs=1 00:16:13.426 00:16:13.426 verify_dump=1 00:16:13.426 verify_backlog=512 00:16:13.426 verify_state_save=0 00:16:13.426 do_verify=1 00:16:13.426 verify=crc32c-intel 00:16:13.426 [job0] 00:16:13.426 filename=/dev/nvme0n1 00:16:13.426 Could not set queue depth (nvme0n1) 00:16:13.684 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:13.684 fio-3.35 00:16:13.684 Starting 1 thread 00:16:14.620 06:46:28 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:14.620 06:46:28 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:14.878 06:46:28 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:16:14.878 06:46:28 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:16:14.878 06:46:28 -- target/multipath.sh@22 -- # local timeout=20 00:16:14.878 06:46:28 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:14.878 06:46:28 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:14.878 06:46:28 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:14.878 06:46:28 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:16:14.878 06:46:28 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:16:14.878 06:46:28 -- target/multipath.sh@22 -- # local timeout=20 00:16:14.878 06:46:28 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:14.878 06:46:28 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:14.878 06:46:28 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:14.878 06:46:28 -- target/multipath.sh@25 -- # sleep 1s 00:16:16.255 06:46:29 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:16.255 06:46:29 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:16:16.255 06:46:29 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:16.255 06:46:29 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:16.255 06:46:30 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:16.513 06:46:30 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:16:16.513 06:46:30 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:16:16.513 06:46:30 -- target/multipath.sh@22 -- # local timeout=20 00:16:16.513 06:46:30 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:16.513 06:46:30 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:16.513 06:46:30 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:16.513 06:46:30 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:16:16.513 06:46:30 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:16:16.513 06:46:30 -- target/multipath.sh@22 -- # local timeout=20 00:16:16.513 06:46:30 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:16.513 06:46:30 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:16.513 06:46:30 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:16.513 06:46:30 -- target/multipath.sh@25 -- # sleep 1s 00:16:17.449 06:46:31 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:17.449 06:46:31 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:16:17.449 06:46:31 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:17.449 06:46:31 -- target/multipath.sh@104 -- # wait 75158 00:16:20.035 00:16:20.035 job0: (groupid=0, jobs=1): err= 0: pid=75180: Sat Dec 14 06:46:33 2024 00:16:20.035 read: IOPS=12.6k, BW=49.2MiB/s (51.6MB/s)(295MiB/6003msec) 00:16:20.035 slat (usec): min=2, max=5296, avg=44.74, stdev=203.44 00:16:20.035 clat (usec): min=962, max=13174, avg=6983.98, stdev=1098.31 00:16:20.035 lat (usec): min=1484, max=13183, avg=7028.72, stdev=1105.67 00:16:20.035 clat percentiles (usec): 00:16:20.035 | 1.00th=[ 4359], 5.00th=[ 5407], 10.00th=[ 5866], 20.00th=[ 6128], 00:16:20.035 | 30.00th=[ 6390], 40.00th=[ 6652], 50.00th=[ 6915], 60.00th=[ 7177], 00:16:20.035 | 70.00th=[ 7439], 80.00th=[ 7767], 90.00th=[ 8225], 95.00th=[ 8848], 00:16:20.035 | 99.00th=[10421], 99.50th=[10814], 99.90th=[11469], 99.95th=[12256], 00:16:20.035 | 99.99th=[13173] 00:16:20.035 bw ( KiB/s): min=11752, max=32384, per=52.94%, avg=26656.00, stdev=6919.37, samples=11 00:16:20.035 iops : min= 2938, max= 8096, avg=6664.00, stdev=1729.84, samples=11 00:16:20.035 write: IOPS=7399, BW=28.9MiB/s (30.3MB/s)(150MiB/5189msec); 0 zone resets 00:16:20.035 slat (usec): min=4, max=5359, avg=58.15, stdev=141.09 00:16:20.035 clat (usec): min=667, max=12773, avg=6076.20, stdev=909.03 00:16:20.035 lat (usec): min=697, max=13059, avg=6134.35, stdev=910.83 00:16:20.035 clat percentiles (usec): 00:16:20.035 | 1.00th=[ 3490], 5.00th=[ 4424], 10.00th=[ 5080], 20.00th=[ 5538], 00:16:20.035 | 30.00th=[ 5735], 40.00th=[ 5932], 50.00th=[ 6128], 60.00th=[ 6259], 00:16:20.035 | 70.00th=[ 6456], 80.00th=[ 6652], 90.00th=[ 6980], 95.00th=[ 7242], 00:16:20.035 | 99.00th=[ 8979], 99.50th=[ 9634], 99.90th=[10814], 99.95th=[11207], 00:16:20.035 | 99.99th=[12256] 00:16:20.035 bw ( KiB/s): min=12288, max=32912, per=89.97%, avg=26629.82, stdev=6674.93, samples=11 00:16:20.035 iops : min= 3072, max= 8228, avg=6657.45, stdev=1668.73, samples=11 00:16:20.035 lat (usec) : 750=0.01%, 1000=0.01% 00:16:20.035 lat (msec) : 2=0.03%, 4=1.37%, 10=97.35%, 20=1.24% 00:16:20.035 cpu : usr=6.75%, sys=24.63%, ctx=7155, majf=0, minf=125 00:16:20.035 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:16:20.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:20.035 issued rwts: total=75560,38394,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.035 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:20.035 00:16:20.035 Run status group 0 (all jobs): 00:16:20.035 READ: bw=49.2MiB/s (51.6MB/s), 49.2MiB/s-49.2MiB/s (51.6MB/s-51.6MB/s), io=295MiB (309MB), run=6003-6003msec 00:16:20.035 WRITE: bw=28.9MiB/s (30.3MB/s), 28.9MiB/s-28.9MiB/s (30.3MB/s-30.3MB/s), io=150MiB (157MB), run=5189-5189msec 00:16:20.036 00:16:20.036 Disk stats (read/write): 00:16:20.036 nvme0n1: ios=73724/38394, merge=0/0, ticks=477989/214483, in_queue=692472, util=98.62% 00:16:20.036 06:46:33 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:20.036 06:46:33 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:20.294 06:46:34 -- target/multipath.sh@109 -- # 
check_ana_state nvme0c0n1 optimized 00:16:20.294 06:46:34 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:16:20.294 06:46:34 -- target/multipath.sh@22 -- # local timeout=20 00:16:20.294 06:46:34 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:20.294 06:46:34 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:20.294 06:46:34 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:20.294 06:46:34 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:16:20.294 06:46:34 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:16:20.294 06:46:34 -- target/multipath.sh@22 -- # local timeout=20 00:16:20.294 06:46:34 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:20.294 06:46:34 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:20.294 06:46:34 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:16:20.294 06:46:34 -- target/multipath.sh@25 -- # sleep 1s 00:16:21.675 06:46:35 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:21.675 06:46:35 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:21.675 06:46:35 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:21.675 06:46:35 -- target/multipath.sh@113 -- # echo round-robin 00:16:21.675 06:46:35 -- target/multipath.sh@116 -- # fio_pid=75304 00:16:21.675 06:46:35 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:16:21.675 06:46:35 -- target/multipath.sh@118 -- # sleep 1 00:16:21.675 [global] 00:16:21.675 thread=1 00:16:21.675 invalidate=1 00:16:21.675 rw=randrw 00:16:21.675 time_based=1 00:16:21.675 runtime=6 00:16:21.675 ioengine=libaio 00:16:21.675 direct=1 00:16:21.675 bs=4096 00:16:21.675 iodepth=128 00:16:21.675 norandommap=0 00:16:21.675 numjobs=1 00:16:21.675 00:16:21.675 verify_dump=1 00:16:21.675 verify_backlog=512 00:16:21.675 verify_state_save=0 00:16:21.675 do_verify=1 00:16:21.675 verify=crc32c-intel 00:16:21.675 [job0] 00:16:21.675 filename=/dev/nvme0n1 00:16:21.675 Could not set queue depth (nvme0n1) 00:16:21.675 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:21.675 fio-3.35 00:16:21.675 Starting 1 thread 00:16:22.611 06:46:36 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:22.611 06:46:36 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:22.870 06:46:36 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:16:22.870 06:46:36 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:16:22.870 06:46:36 -- target/multipath.sh@22 -- # local timeout=20 00:16:22.870 06:46:36 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:22.870 06:46:36 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:16:22.870 06:46:36 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:22.870 06:46:36 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:16:22.870 06:46:36 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:16:22.870 06:46:36 -- target/multipath.sh@22 -- # local timeout=20 00:16:22.870 06:46:36 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:22.870 06:46:36 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:22.870 06:46:36 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:22.870 06:46:36 -- target/multipath.sh@25 -- # sleep 1s 00:16:24.246 06:46:37 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:24.246 06:46:37 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:24.246 06:46:37 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:24.246 06:46:37 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:24.246 06:46:38 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:24.505 06:46:38 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:16:24.505 06:46:38 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:16:24.505 06:46:38 -- target/multipath.sh@22 -- # local timeout=20 00:16:24.505 06:46:38 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:24.505 06:46:38 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:24.505 06:46:38 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:24.505 06:46:38 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:16:24.505 06:46:38 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:16:24.505 06:46:38 -- target/multipath.sh@22 -- # local timeout=20 00:16:24.505 06:46:38 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:24.505 06:46:38 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:24.505 06:46:38 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:24.505 06:46:38 -- target/multipath.sh@25 -- # sleep 1s 00:16:25.882 06:46:39 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:25.882 06:46:39 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:16:25.882 06:46:39 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:25.882 06:46:39 -- target/multipath.sh@132 -- # wait 75304 00:16:27.785 00:16:27.785 job0: (groupid=0, jobs=1): err= 0: pid=75329: Sat Dec 14 06:46:41 2024 00:16:27.785 read: IOPS=13.7k, BW=53.3MiB/s (55.9MB/s)(320MiB/6000msec) 00:16:27.785 slat (usec): min=3, max=5216, avg=37.05, stdev=178.51 00:16:27.785 clat (usec): min=494, max=15482, avg=6496.86, stdev=1458.06 00:16:27.785 lat (usec): min=528, max=15513, avg=6533.91, stdev=1470.58 00:16:27.785 clat percentiles (usec): 00:16:27.785 | 1.00th=[ 3163], 5.00th=[ 4015], 10.00th=[ 4555], 20.00th=[ 5276], 00:16:27.785 | 30.00th=[ 5866], 40.00th=[ 6259], 50.00th=[ 6521], 60.00th=[ 6849], 00:16:27.785 | 70.00th=[ 7242], 80.00th=[ 7635], 90.00th=[ 8160], 95.00th=[ 8848], 00:16:27.785 | 99.00th=[10290], 99.50th=[10814], 99.90th=[11731], 99.95th=[13566], 00:16:27.785 | 99.99th=[14746] 00:16:27.785 bw ( KiB/s): min=12736, max=44336, per=51.02%, avg=27866.00, stdev=9837.25, samples=11 00:16:27.785 iops : min= 3184, max=11084, avg=6966.45, stdev=2459.27, samples=11 00:16:27.785 write: IOPS=8068, BW=31.5MiB/s (33.0MB/s)(166MiB/5268msec); 0 zone resets 00:16:27.785 slat (usec): min=8, max=2577, avg=49.05, stdev=112.67 00:16:27.785 clat (usec): min=361, max=14388, avg=5386.23, stdev=1458.51 00:16:27.785 lat (usec): min=468, max=14432, avg=5435.29, stdev=1468.88 00:16:27.785 clat percentiles (usec): 00:16:27.785 | 1.00th=[ 2442], 5.00th=[ 2999], 10.00th=[ 3359], 20.00th=[ 3916], 00:16:27.785 | 30.00th=[ 4490], 40.00th=[ 5211], 50.00th=[ 5669], 60.00th=[ 5997], 00:16:27.785 | 70.00th=[ 6259], 80.00th=[ 6521], 90.00th=[ 6980], 95.00th=[ 7373], 00:16:27.785 | 99.00th=[ 8848], 99.50th=[ 9634], 99.90th=[11207], 99.95th=[11863], 00:16:27.785 | 99.99th=[13829] 00:16:27.785 bw ( KiB/s): min=13040, max=43784, per=86.44%, avg=27896.64, stdev=9708.46, samples=11 00:16:27.786 iops : min= 3260, max=10946, avg=6974.09, stdev=2427.05, samples=11 00:16:27.786 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:16:27.786 lat (msec) : 2=0.18%, 4=10.40%, 10=88.25%, 20=1.14% 00:16:27.786 cpu : usr=6.46%, sys=28.64%, ctx=8688, majf=0, minf=108 00:16:27.786 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:27.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:27.786 issued rwts: total=81920,42504,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.786 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:27.786 00:16:27.786 Run status group 0 (all jobs): 00:16:27.786 READ: bw=53.3MiB/s (55.9MB/s), 53.3MiB/s-53.3MiB/s (55.9MB/s-55.9MB/s), io=320MiB (336MB), run=6000-6000msec 00:16:27.786 WRITE: bw=31.5MiB/s (33.0MB/s), 31.5MiB/s-31.5MiB/s (33.0MB/s-33.0MB/s), io=166MiB (174MB), run=5268-5268msec 00:16:27.786 00:16:27.786 Disk stats (read/write): 00:16:27.786 nvme0n1: ios=81136/41427, merge=0/0, ticks=480744/199026, in_queue=679770, util=98.62% 00:16:27.786 06:46:41 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:27.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:27.786 06:46:41 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:27.786 06:46:41 -- common/autotest_common.sh@1208 -- # local i=0 00:16:27.786 06:46:41 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:27.786 06:46:41 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.786 06:46:41 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:27.786 06:46:41 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.786 06:46:41 -- common/autotest_common.sh@1220 -- # return 0 00:16:27.786 06:46:41 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:28.044 06:46:41 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:16:28.045 06:46:41 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:16:28.045 06:46:41 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:16:28.045 06:46:41 -- target/multipath.sh@144 -- # nvmftestfini 00:16:28.045 06:46:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:28.045 06:46:41 -- nvmf/common.sh@116 -- # sync 00:16:28.045 06:46:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:28.045 06:46:41 -- nvmf/common.sh@119 -- # set +e 00:16:28.045 06:46:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:28.045 06:46:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:28.045 rmmod nvme_tcp 00:16:28.045 rmmod nvme_fabrics 00:16:28.045 rmmod nvme_keyring 00:16:28.045 06:46:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:28.045 06:46:42 -- nvmf/common.sh@123 -- # set -e 00:16:28.045 06:46:42 -- nvmf/common.sh@124 -- # return 0 00:16:28.045 06:46:42 -- nvmf/common.sh@477 -- # '[' -n 75015 ']' 00:16:28.045 06:46:42 -- nvmf/common.sh@478 -- # killprocess 75015 00:16:28.045 06:46:42 -- common/autotest_common.sh@936 -- # '[' -z 75015 ']' 00:16:28.045 06:46:42 -- common/autotest_common.sh@940 -- # kill -0 75015 00:16:28.045 06:46:42 -- common/autotest_common.sh@941 -- # uname 00:16:28.303 06:46:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:28.303 06:46:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75015 00:16:28.303 killing process with pid 75015 00:16:28.303 06:46:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:28.303 06:46:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:28.303 06:46:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75015' 00:16:28.303 06:46:42 -- common/autotest_common.sh@955 -- # kill 75015 00:16:28.303 06:46:42 -- common/autotest_common.sh@960 -- # wait 75015 00:16:28.562 06:46:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:28.562 06:46:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:28.562 06:46:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:28.562 06:46:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:28.562 06:46:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:28.562 06:46:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.562 06:46:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.562 06:46:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.562 06:46:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:28.562 ************************************ 00:16:28.562 END TEST nvmf_multipath 00:16:28.562 ************************************ 00:16:28.562 00:16:28.562 real 0m20.717s 00:16:28.562 user 1m20.523s 00:16:28.562 sys 0m7.326s 00:16:28.562 06:46:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:28.562 06:46:42 -- common/autotest_common.sh@10 -- # set +x 00:16:28.562 06:46:42 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:28.562 06:46:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:28.562 06:46:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:28.562 06:46:42 -- common/autotest_common.sh@10 -- # set +x 00:16:28.562 ************************************ 00:16:28.562 START TEST nvmf_zcopy 00:16:28.562 ************************************ 00:16:28.562 06:46:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:28.821 * Looking for test storage... 00:16:28.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:28.821 06:46:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:28.821 06:46:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:28.821 06:46:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:28.821 06:46:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:28.821 06:46:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:28.821 06:46:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:28.821 06:46:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:28.821 06:46:42 -- scripts/common.sh@335 -- # IFS=.-: 00:16:28.821 06:46:42 -- scripts/common.sh@335 -- # read -ra ver1 00:16:28.821 06:46:42 -- scripts/common.sh@336 -- # IFS=.-: 00:16:28.821 06:46:42 -- scripts/common.sh@336 -- # read -ra ver2 00:16:28.821 06:46:42 -- scripts/common.sh@337 -- # local 'op=<' 00:16:28.821 06:46:42 -- scripts/common.sh@339 -- # ver1_l=2 00:16:28.821 06:46:42 -- scripts/common.sh@340 -- # ver2_l=1 00:16:28.821 06:46:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:28.821 06:46:42 -- scripts/common.sh@343 -- # case "$op" in 00:16:28.821 06:46:42 -- scripts/common.sh@344 -- # : 1 00:16:28.821 06:46:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:28.821 06:46:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:28.821 06:46:42 -- scripts/common.sh@364 -- # decimal 1 00:16:28.821 06:46:42 -- scripts/common.sh@352 -- # local d=1 00:16:28.821 06:46:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:28.821 06:46:42 -- scripts/common.sh@354 -- # echo 1 00:16:28.821 06:46:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:28.822 06:46:42 -- scripts/common.sh@365 -- # decimal 2 00:16:28.822 06:46:42 -- scripts/common.sh@352 -- # local d=2 00:16:28.822 06:46:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:28.822 06:46:42 -- scripts/common.sh@354 -- # echo 2 00:16:28.822 06:46:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:28.822 06:46:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:28.822 06:46:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:28.822 06:46:42 -- scripts/common.sh@367 -- # return 0 00:16:28.822 06:46:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:28.822 06:46:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:28.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.822 --rc genhtml_branch_coverage=1 00:16:28.822 --rc genhtml_function_coverage=1 00:16:28.822 --rc genhtml_legend=1 00:16:28.822 --rc geninfo_all_blocks=1 00:16:28.822 --rc geninfo_unexecuted_blocks=1 00:16:28.822 00:16:28.822 ' 00:16:28.822 06:46:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:28.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.822 --rc genhtml_branch_coverage=1 00:16:28.822 --rc genhtml_function_coverage=1 00:16:28.822 --rc genhtml_legend=1 00:16:28.822 --rc geninfo_all_blocks=1 00:16:28.822 --rc geninfo_unexecuted_blocks=1 00:16:28.822 00:16:28.822 ' 00:16:28.822 06:46:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:28.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.822 --rc genhtml_branch_coverage=1 00:16:28.822 --rc genhtml_function_coverage=1 00:16:28.822 --rc genhtml_legend=1 00:16:28.822 --rc geninfo_all_blocks=1 00:16:28.822 --rc geninfo_unexecuted_blocks=1 00:16:28.822 00:16:28.822 ' 00:16:28.822 06:46:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:28.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.822 --rc genhtml_branch_coverage=1 00:16:28.822 --rc genhtml_function_coverage=1 00:16:28.822 --rc genhtml_legend=1 00:16:28.822 --rc geninfo_all_blocks=1 00:16:28.822 --rc geninfo_unexecuted_blocks=1 00:16:28.822 00:16:28.822 ' 00:16:28.822 06:46:42 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:28.822 06:46:42 -- nvmf/common.sh@7 -- # uname -s 00:16:28.822 06:46:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.822 06:46:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.822 06:46:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.822 06:46:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.822 06:46:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.822 06:46:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.822 06:46:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.822 06:46:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.822 06:46:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.822 06:46:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.822 06:46:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:16:28.822 
06:46:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:16:28.822 06:46:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.822 06:46:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.822 06:46:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:28.822 06:46:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:28.822 06:46:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.822 06:46:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.822 06:46:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.822 06:46:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.822 06:46:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.822 06:46:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.822 06:46:42 -- paths/export.sh@5 -- # export PATH 00:16:28.822 06:46:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.822 06:46:42 -- nvmf/common.sh@46 -- # : 0 00:16:28.822 06:46:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:28.822 06:46:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:28.822 06:46:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:28.822 06:46:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.822 06:46:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.822 06:46:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:16:28.822 06:46:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:28.822 06:46:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:28.822 06:46:42 -- target/zcopy.sh@12 -- # nvmftestinit 00:16:28.822 06:46:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:28.822 06:46:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.822 06:46:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:28.822 06:46:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:28.822 06:46:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:28.822 06:46:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.822 06:46:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.822 06:46:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.822 06:46:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:28.822 06:46:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:28.822 06:46:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:28.822 06:46:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:28.822 06:46:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:28.822 06:46:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:28.822 06:46:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.822 06:46:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:28.822 06:46:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:28.822 06:46:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:28.822 06:46:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:28.822 06:46:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:28.822 06:46:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:28.822 06:46:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.822 06:46:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:28.822 06:46:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:28.822 06:46:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:28.822 06:46:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:28.822 06:46:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:28.822 06:46:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:28.822 Cannot find device "nvmf_tgt_br" 00:16:28.822 06:46:42 -- nvmf/common.sh@154 -- # true 00:16:28.822 06:46:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:28.822 Cannot find device "nvmf_tgt_br2" 00:16:28.822 06:46:42 -- nvmf/common.sh@155 -- # true 00:16:28.822 06:46:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:28.822 06:46:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:28.822 Cannot find device "nvmf_tgt_br" 00:16:28.822 06:46:42 -- nvmf/common.sh@157 -- # true 00:16:28.822 06:46:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:28.822 Cannot find device "nvmf_tgt_br2" 00:16:28.822 06:46:42 -- nvmf/common.sh@158 -- # true 00:16:28.822 06:46:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:28.822 06:46:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:29.082 06:46:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:29.082 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:29.082 06:46:42 -- nvmf/common.sh@161 -- # true 00:16:29.082 06:46:42 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:29.082 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:29.082 06:46:42 -- nvmf/common.sh@162 -- # true 00:16:29.082 06:46:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:29.082 06:46:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:29.082 06:46:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:29.082 06:46:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:29.082 06:46:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:29.082 06:46:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:29.082 06:46:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:29.082 06:46:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:29.082 06:46:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:29.082 06:46:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:29.082 06:46:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:29.082 06:46:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:29.082 06:46:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:29.082 06:46:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:29.082 06:46:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:29.082 06:46:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:29.082 06:46:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:29.082 06:46:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:29.082 06:46:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:29.082 06:46:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:29.082 06:46:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:29.082 06:46:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:29.082 06:46:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:29.082 06:46:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:29.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:29.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:16:29.082 00:16:29.082 --- 10.0.0.2 ping statistics --- 00:16:29.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.082 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:29.082 06:46:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:29.082 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:29.082 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:16:29.082 00:16:29.082 --- 10.0.0.3 ping statistics --- 00:16:29.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.082 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:29.082 06:46:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:29.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:29.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:29.082 00:16:29.082 --- 10.0.0.1 ping statistics --- 00:16:29.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.082 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:29.082 06:46:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:29.082 06:46:43 -- nvmf/common.sh@421 -- # return 0 00:16:29.082 06:46:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:29.082 06:46:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:29.082 06:46:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:29.082 06:46:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:29.082 06:46:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:29.082 06:46:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:29.082 06:46:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:29.082 06:46:43 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:29.082 06:46:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:29.082 06:46:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:29.082 06:46:43 -- common/autotest_common.sh@10 -- # set +x 00:16:29.082 06:46:43 -- nvmf/common.sh@469 -- # nvmfpid=75619 00:16:29.082 06:46:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:29.082 06:46:43 -- nvmf/common.sh@470 -- # waitforlisten 75619 00:16:29.082 06:46:43 -- common/autotest_common.sh@829 -- # '[' -z 75619 ']' 00:16:29.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.082 06:46:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.082 06:46:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:29.082 06:46:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.082 06:46:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:29.082 06:46:43 -- common/autotest_common.sh@10 -- # set +x 00:16:29.341 [2024-12-14 06:46:43.105463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:29.341 [2024-12-14 06:46:43.105781] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.341 [2024-12-14 06:46:43.246447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.341 [2024-12-14 06:46:43.329876] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:29.341 [2024-12-14 06:46:43.330079] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.341 [2024-12-14 06:46:43.330094] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.341 [2024-12-14 06:46:43.330103] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
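The nvmf_veth_init entries above boil down to a small veth/namespace topology: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, while the target interfaces nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, with the peer ends joined over the nvmf_br bridge. The following is a consolidated sketch of that bring-up, using only the names, addresses, and commands already recorded in the log and omitting the initial cleanup of stale devices.

# create the target namespace and three veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target ends into the namespace and assign addresses
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up and bridge the peer ends together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up

# allow NVMe/TCP traffic to port 4420 and verify connectivity in both directions
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1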
00:16:29.341 [2024-12-14 06:46:43.330139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.277 06:46:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:30.277 06:46:43 -- common/autotest_common.sh@862 -- # return 0 00:16:30.277 06:46:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:30.277 06:46:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:30.277 06:46:43 -- common/autotest_common.sh@10 -- # set +x 00:16:30.277 06:46:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.277 06:46:43 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:30.277 06:46:43 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:30.277 06:46:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.277 06:46:43 -- common/autotest_common.sh@10 -- # set +x 00:16:30.277 [2024-12-14 06:46:44.001580] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.277 06:46:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.277 06:46:44 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:30.277 06:46:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.277 06:46:44 -- common/autotest_common.sh@10 -- # set +x 00:16:30.277 06:46:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.277 06:46:44 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.277 06:46:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.277 06:46:44 -- common/autotest_common.sh@10 -- # set +x 00:16:30.277 [2024-12-14 06:46:44.017702] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.277 06:46:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.277 06:46:44 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:30.277 06:46:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.277 06:46:44 -- common/autotest_common.sh@10 -- # set +x 00:16:30.277 06:46:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.277 06:46:44 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:30.277 06:46:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.277 06:46:44 -- common/autotest_common.sh@10 -- # set +x 00:16:30.277 malloc0 00:16:30.277 06:46:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.277 06:46:44 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:30.277 06:46:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.277 06:46:44 -- common/autotest_common.sh@10 -- # set +x 00:16:30.277 06:46:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.277 06:46:44 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:30.277 06:46:44 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:30.277 06:46:44 -- nvmf/common.sh@520 -- # config=() 00:16:30.277 06:46:44 -- nvmf/common.sh@520 -- # local subsystem config 00:16:30.277 06:46:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:30.277 06:46:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:30.277 { 00:16:30.277 "params": { 00:16:30.277 "name": "Nvme$subsystem", 00:16:30.277 "trtype": "$TEST_TRANSPORT", 
00:16:30.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.277 "adrfam": "ipv4", 00:16:30.277 "trsvcid": "$NVMF_PORT", 00:16:30.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.277 "hdgst": ${hdgst:-false}, 00:16:30.277 "ddgst": ${ddgst:-false} 00:16:30.277 }, 00:16:30.277 "method": "bdev_nvme_attach_controller" 00:16:30.277 } 00:16:30.277 EOF 00:16:30.277 )") 00:16:30.277 06:46:44 -- nvmf/common.sh@542 -- # cat 00:16:30.277 06:46:44 -- nvmf/common.sh@544 -- # jq . 00:16:30.277 06:46:44 -- nvmf/common.sh@545 -- # IFS=, 00:16:30.277 06:46:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:30.277 "params": { 00:16:30.277 "name": "Nvme1", 00:16:30.277 "trtype": "tcp", 00:16:30.277 "traddr": "10.0.0.2", 00:16:30.277 "adrfam": "ipv4", 00:16:30.277 "trsvcid": "4420", 00:16:30.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:30.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:30.277 "hdgst": false, 00:16:30.277 "ddgst": false 00:16:30.277 }, 00:16:30.277 "method": "bdev_nvme_attach_controller" 00:16:30.277 }' 00:16:30.277 [2024-12-14 06:46:44.117387] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:30.277 [2024-12-14 06:46:44.117664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75670 ] 00:16:30.277 [2024-12-14 06:46:44.254466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.536 [2024-12-14 06:46:44.343193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.794 Running I/O for 10 seconds... 00:16:40.770 00:16:40.770 Latency(us) 00:16:40.770 [2024-12-14T06:46:54.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.770 [2024-12-14T06:46:54.762Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:40.770 Verification LBA range: start 0x0 length 0x1000 00:16:40.770 Nvme1n1 : 10.01 11350.22 88.67 0.00 0.00 11249.37 1400.09 19184.17 00:16:40.770 [2024-12-14T06:46:54.762Z] =================================================================================================================== 00:16:40.770 [2024-12-14T06:46:54.762Z] Total : 11350.22 88.67 0.00 0.00 11249.37 1400.09 19184.17 00:16:41.030 06:46:54 -- target/zcopy.sh@39 -- # perfpid=75786 00:16:41.030 06:46:54 -- target/zcopy.sh@41 -- # xtrace_disable 00:16:41.030 06:46:54 -- common/autotest_common.sh@10 -- # set +x 00:16:41.030 06:46:54 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:41.030 06:46:54 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:41.030 06:46:54 -- nvmf/common.sh@520 -- # config=() 00:16:41.030 06:46:54 -- nvmf/common.sh@520 -- # local subsystem config 00:16:41.030 06:46:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:41.030 06:46:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:41.030 { 00:16:41.030 "params": { 00:16:41.030 "name": "Nvme$subsystem", 00:16:41.030 "trtype": "$TEST_TRANSPORT", 00:16:41.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.030 "adrfam": "ipv4", 00:16:41.030 "trsvcid": "$NVMF_PORT", 00:16:41.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.030 "hdgst": ${hdgst:-false}, 00:16:41.030 "ddgst": ${ddgst:-false} 
00:16:41.030 }, 00:16:41.030 "method": "bdev_nvme_attach_controller" 00:16:41.030 } 00:16:41.030 EOF 00:16:41.030 )") 00:16:41.030 06:46:54 -- nvmf/common.sh@542 -- # cat 00:16:41.030 [2024-12-14 06:46:54.861243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.030 [2024-12-14 06:46:54.861305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.030 06:46:54 -- nvmf/common.sh@544 -- # jq . 00:16:41.030 2024/12/14 06:46:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.030 06:46:54 -- nvmf/common.sh@545 -- # IFS=, 00:16:41.030 06:46:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:41.030 "params": { 00:16:41.030 "name": "Nvme1", 00:16:41.030 "trtype": "tcp", 00:16:41.030 "traddr": "10.0.0.2", 00:16:41.030 "adrfam": "ipv4", 00:16:41.030 "trsvcid": "4420", 00:16:41.030 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:41.030 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:41.030 "hdgst": false, 00:16:41.030 "ddgst": false 00:16:41.030 }, 00:16:41.030 "method": "bdev_nvme_attach_controller" 00:16:41.030 }' 00:16:41.030 [2024-12-14 06:46:54.873186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.030 [2024-12-14 06:46:54.873416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.030 2024/12/14 06:46:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.030 [2024-12-14 06:46:54.881184] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.030 [2024-12-14 06:46:54.881395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.030 2024/12/14 06:46:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.030 [2024-12-14 06:46:54.889181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.030 [2024-12-14 06:46:54.889351] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.030 2024/12/14 06:46:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.030 [2024-12-14 06:46:54.897184] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.030 [2024-12-14 06:46:54.897332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.030 2024/12/14 06:46:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.030 [2024-12-14 06:46:54.905185] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.030 [2024-12-14 06:46:54.905376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:41.030 2024/12/14 06:46:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.030 [2024-12-14 06:46:54.912583] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:41.030 [2024-12-14 06:46:54.913006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75786 ] 00:16:41.030 [2024-12-14 06:46:54.913213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.030 [2024-12-14 06:46:54.913230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.030 2024/12/14 06:46:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.030 [2024-12-14 06:46:54.921189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.030 [2024-12-14 06:46:54.921218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.030 2024/12/14 06:46:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.030 [2024-12-14 06:46:54.933216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.030 [2024-12-14 06:46:54.933244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.030 2024/12/14 06:46:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.030 [2024-12-14 06:46:54.945199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.030 [2024-12-14 06:46:54.945226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.030 2024/12/14 06:46:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.030 [2024-12-14 06:46:54.957201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.030 [2024-12-14 06:46:54.957225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.030 2024/12/14 06:46:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.030 [2024-12-14 06:46:54.969206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.030 [2024-12-14 06:46:54.969228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.030 2024/12/14 06:46:54 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.030 [2024-12-14 06:46:54.981209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.030 [2024-12-14 06:46:54.981234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.030 2024/12/14 06:46:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.030 [2024-12-14 06:46:54.993211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.030 [2024-12-14 06:46:54.993232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.030 2024/12/14 06:46:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.030 [2024-12-14 06:46:55.005213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.030 [2024-12-14 06:46:55.005238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.030 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.030 [2024-12-14 06:46:55.017220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.030 [2024-12-14 06:46:55.017247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.293 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.293 [2024-12-14 06:46:55.029221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.293 [2024-12-14 06:46:55.029245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.293 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.293 [2024-12-14 06:46:55.041228] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.293 [2024-12-14 06:46:55.041249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.293 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.293 [2024-12-14 06:46:55.053235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.293 [2024-12-14 06:46:55.053256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.293 [2024-12-14 06:46:55.054250] app.c: 
798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.293 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.293 [2024-12-14 06:46:55.065250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.293 [2024-12-14 06:46:55.065277] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.293 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.293 [2024-12-14 06:46:55.077256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.293 [2024-12-14 06:46:55.077277] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.293 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.293 [2024-12-14 06:46:55.089249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.293 [2024-12-14 06:46:55.089270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.293 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.294 [2024-12-14 06:46:55.101281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.294 [2024-12-14 06:46:55.101302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.294 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.294 [2024-12-14 06:46:55.113253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.294 [2024-12-14 06:46:55.113273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.294 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.294 [2024-12-14 06:46:55.125265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.294 [2024-12-14 06:46:55.125288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.294 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.294 [2024-12-14 06:46:55.137265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.294 [2024-12-14 
06:46:55.137286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.294 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.294 [2024-12-14 06:46:55.149263] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.294 [2024-12-14 06:46:55.149284] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.294 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.294 [2024-12-14 06:46:55.161266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.294 [2024-12-14 06:46:55.161285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.294 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.294 [2024-12-14 06:46:55.171469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.294 [2024-12-14 06:46:55.173273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.294 [2024-12-14 06:46:55.173293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.294 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.294 [2024-12-14 06:46:55.185275] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.294 [2024-12-14 06:46:55.185295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.294 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.294 [2024-12-14 06:46:55.197288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.294 [2024-12-14 06:46:55.197312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.294 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.294 [2024-12-14 06:46:55.209299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.294 [2024-12-14 06:46:55.209321] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.294 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.294 [2024-12-14 
06:46:55.221289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.294 [2024-12-14 06:46:55.221331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.294 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.294 [2024-12-14 06:46:55.233293] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.294 [2024-12-14 06:46:55.233333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.294 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.294 [2024-12-14 06:46:55.245296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.294 [2024-12-14 06:46:55.245336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.294 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.294 [2024-12-14 06:46:55.257300] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.294 [2024-12-14 06:46:55.257325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.294 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.294 [2024-12-14 06:46:55.269303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.294 [2024-12-14 06:46:55.269328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.294 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.294 [2024-12-14 06:46:55.281312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.294 [2024-12-14 06:46:55.281341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.555 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.555 [2024-12-14 06:46:55.293305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.555 [2024-12-14 06:46:55.293329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.555 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:16:41.555 [2024-12-14 06:46:55.305324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.555 [2024-12-14 06:46:55.305355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.555 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.555 [2024-12-14 06:46:55.317325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.555 [2024-12-14 06:46:55.317358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.555 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.555 [2024-12-14 06:46:55.329334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.555 [2024-12-14 06:46:55.329362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.555 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.555 [2024-12-14 06:46:55.341333] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.555 [2024-12-14 06:46:55.341368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.555 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.555 [2024-12-14 06:46:55.353335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.555 [2024-12-14 06:46:55.353363] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.555 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.555 [2024-12-14 06:46:55.365346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.555 [2024-12-14 06:46:55.365377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.555 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.555 Running I/O for 5 seconds... 
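For reference, the target-side bring-up recorded in the rpc_cmd xtrace earlier in this test (zero-copy transport, subsystem cnode1, TCP listeners on 10.0.0.2:4420, malloc bdev, namespace 1) maps onto the rpc.py invocations below. This is a sketch assuming the default /var/tmp/spdk.sock RPC socket; in the run above the same calls go through the test's rpc_cmd helper against the namespaced target, and the repeated "Requested NSID 1 already in use" errors are add_ns attempts for NSID 1 made while that namespace is still attached.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport with zero-copy enabled, exactly as passed by zcopy.sh
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

# subsystem with serial SPDKISFASTANDAWESOME-style test serial and a 10-namespace limit, plus listeners
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# backing malloc bdev (size 32, block size 4096, named malloc0, per the RPC arguments in the log)
$RPC bdev_malloc_create 32 4096 -b malloc0

# attach the bdev as namespace 1 of cnode1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# initiator side used above: bdevperf driven by a generated NVMe-oF controller config
# /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192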
00:16:41.555 [2024-12-14 06:46:55.377340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.555 [2024-12-14 06:46:55.377367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.555 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.555 [2024-12-14 06:46:55.394691] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.555 [2024-12-14 06:46:55.394751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.555 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.555 [2024-12-14 06:46:55.410770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.555 [2024-12-14 06:46:55.410818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.555 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.555 [2024-12-14 06:46:55.428096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.555 [2024-12-14 06:46:55.428129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.555 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.555 [2024-12-14 06:46:55.444773] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.555 [2024-12-14 06:46:55.444805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.555 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.555 [2024-12-14 06:46:55.461184] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.555 [2024-12-14 06:46:55.461219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.555 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.555 [2024-12-14 06:46:55.477526] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.555 [2024-12-14 06:46:55.477557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.555 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:41.555 [2024-12-14 06:46:55.494225] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.555 [2024-12-14 06:46:55.494258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.555 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.555 [2024-12-14 06:46:55.510569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.555 [2024-12-14 06:46:55.510603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.555 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.555 [2024-12-14 06:46:55.522469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.555 [2024-12-14 06:46:55.522501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.555 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.555 [2024-12-14 06:46:55.537926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.555 [2024-12-14 06:46:55.537974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.555 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.815 [2024-12-14 06:46:55.554136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.815 [2024-12-14 06:46:55.554165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.815 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.815 [2024-12-14 06:46:55.570599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.815 [2024-12-14 06:46:55.570642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.815 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.815 [2024-12-14 06:46:55.586958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.815 [2024-12-14 06:46:55.586999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.815 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:41.815 [2024-12-14 06:46:55.604021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.815 [2024-12-14 06:46:55.604051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.815 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.815 [2024-12-14 06:46:55.619602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.815 [2024-12-14 06:46:55.619633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.815 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.815 [2024-12-14 06:46:55.631787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.815 [2024-12-14 06:46:55.631821] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.815 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.815 [2024-12-14 06:46:55.647332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.815 [2024-12-14 06:46:55.647366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.815 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.815 [2024-12-14 06:46:55.663944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.815 [2024-12-14 06:46:55.663988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.815 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.815 [2024-12-14 06:46:55.680306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.815 [2024-12-14 06:46:55.680340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.815 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.815 [2024-12-14 06:46:55.696744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.815 [2024-12-14 06:46:55.696777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.815 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.815 [2024-12-14 06:46:55.712516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.815 [2024-12-14 06:46:55.712551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.815 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.815 [2024-12-14 06:46:55.728743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.815 [2024-12-14 06:46:55.728778] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.815 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.815 [2024-12-14 06:46:55.745384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.815 [2024-12-14 06:46:55.745417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.815 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.815 [2024-12-14 06:46:55.761954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.815 [2024-12-14 06:46:55.761997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.815 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.815 [2024-12-14 06:46:55.778381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.815 [2024-12-14 06:46:55.778415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.815 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.815 [2024-12-14 06:46:55.794998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.815 [2024-12-14 06:46:55.795032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.815 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.075 [2024-12-14 06:46:55.811177] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.075 [2024-12-14 06:46:55.811212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.075 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.075 [2024-12-14 06:46:55.827719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.075 [2024-12-14 06:46:55.827770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.075 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.075 [2024-12-14 06:46:55.843809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.075 [2024-12-14 06:46:55.843846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.075 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.075 [2024-12-14 06:46:55.855686] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.075 [2024-12-14 06:46:55.855717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.075 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.075 [2024-12-14 06:46:55.871291] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.075 [2024-12-14 06:46:55.871325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.075 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.075 [2024-12-14 06:46:55.888395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.075 [2024-12-14 06:46:55.888429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.075 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.075 [2024-12-14 06:46:55.904502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.075 [2024-12-14 06:46:55.904536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.075 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.075 [2024-12-14 06:46:55.921122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.075 [2024-12-14 06:46:55.921156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.075 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.075 [2024-12-14 06:46:55.937711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.075 [2024-12-14 06:46:55.937794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.075 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.075 [2024-12-14 06:46:55.954919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.075 [2024-12-14 06:46:55.954999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.075 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.075 [2024-12-14 06:46:55.971037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.075 [2024-12-14 06:46:55.971084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.075 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.075 [2024-12-14 06:46:55.988266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.075 [2024-12-14 06:46:55.988310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.075 2024/12/14 06:46:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.075 [2024-12-14 06:46:56.003343] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.075 [2024-12-14 06:46:56.003392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.075 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.075 [2024-12-14 06:46:56.015387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.075 [2024-12-14 06:46:56.015429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.075 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.075 [2024-12-14 06:46:56.030927] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.075 [2024-12-14 06:46:56.030970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.075 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.075 [2024-12-14 06:46:56.041942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.075 [2024-12-14 06:46:56.041987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.075 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.075 [2024-12-14 06:46:56.058423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.075 [2024-12-14 06:46:56.058455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.075 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.334 [2024-12-14 06:46:56.074638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.334 [2024-12-14 06:46:56.074672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.334 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.334 [2024-12-14 06:46:56.091454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.334 [2024-12-14 06:46:56.091487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.334 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.334 [2024-12-14 06:46:56.107883] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.335 [2024-12-14 06:46:56.107915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.335 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.335 [2024-12-14 06:46:56.123356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.335 [2024-12-14 06:46:56.123389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.335 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.335 [2024-12-14 06:46:56.139954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.335 [2024-12-14 06:46:56.139997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.335 2024/12/14 06:46:56 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.335 [2024-12-14 06:46:56.156144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.335 [2024-12-14 06:46:56.156176] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.335 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.335 [2024-12-14 06:46:56.173037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.335 [2024-12-14 06:46:56.173070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.335 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.335 [2024-12-14 06:46:56.189438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.335 [2024-12-14 06:46:56.189472] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.335 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.335 [2024-12-14 06:46:56.206145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.335 [2024-12-14 06:46:56.206178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.335 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.335 [2024-12-14 06:46:56.223408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.335 [2024-12-14 06:46:56.223437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.335 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.335 [2024-12-14 06:46:56.240917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.335 [2024-12-14 06:46:56.240975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.335 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.335 [2024-12-14 06:46:56.256089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.335 [2024-12-14 06:46:56.256129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.335 2024/12/14 06:46:56 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.335 [2024-12-14 06:46:56.273396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.335 [2024-12-14 06:46:56.273425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.335 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.335 [2024-12-14 06:46:56.289688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.335 [2024-12-14 06:46:56.289716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.335 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.335 [2024-12-14 06:46:56.306607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.335 [2024-12-14 06:46:56.306636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.335 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.335 [2024-12-14 06:46:56.323588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.335 [2024-12-14 06:46:56.323618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.594 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.594 [2024-12-14 06:46:56.340372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.594 [2024-12-14 06:46:56.340401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.594 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.594 [2024-12-14 06:46:56.356278] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.594 [2024-12-14 06:46:56.356308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.594 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.594 [2024-12-14 06:46:56.373072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.594 [2024-12-14 06:46:56.373104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.594 2024/12/14 
06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.594 [2024-12-14 06:46:56.389862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.594 [2024-12-14 06:46:56.389895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.594 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.594 [2024-12-14 06:46:56.406644] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.594 [2024-12-14 06:46:56.406673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.594 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.594 [2024-12-14 06:46:56.422805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.594 [2024-12-14 06:46:56.422838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.594 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.594 [2024-12-14 06:46:56.439972] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.594 [2024-12-14 06:46:56.440002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.594 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.594 [2024-12-14 06:46:56.456560] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.594 [2024-12-14 06:46:56.456605] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.594 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.594 [2024-12-14 06:46:56.472631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.594 [2024-12-14 06:46:56.472663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.594 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.594 [2024-12-14 06:46:56.489329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.594 [2024-12-14 06:46:56.489363] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:42.594 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.594 [2024-12-14 06:46:56.505580] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.594 [2024-12-14 06:46:56.505612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.594 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.594 [2024-12-14 06:46:56.522017] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.594 [2024-12-14 06:46:56.522051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.594 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.594 [2024-12-14 06:46:56.538691] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.594 [2024-12-14 06:46:56.538736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.594 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.595 [2024-12-14 06:46:56.554989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.595 [2024-12-14 06:46:56.555032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.595 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.595 [2024-12-14 06:46:56.571155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.595 [2024-12-14 06:46:56.571192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.595 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.854 [2024-12-14 06:46:56.587334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.854 [2024-12-14 06:46:56.587368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.854 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.854 [2024-12-14 06:46:56.598452] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.854 [2024-12-14 06:46:56.598497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:42.854 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.854 [2024-12-14 06:46:56.615099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.854 [2024-12-14 06:46:56.615132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.854 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.854 [2024-12-14 06:46:56.632151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.854 [2024-12-14 06:46:56.632183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.854 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.854 [2024-12-14 06:46:56.646658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.854 [2024-12-14 06:46:56.646691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.854 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.854 [2024-12-14 06:46:56.661525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.854 [2024-12-14 06:46:56.661558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.854 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.854 [2024-12-14 06:46:56.678429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.854 [2024-12-14 06:46:56.678464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.854 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.854 [2024-12-14 06:46:56.694667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.854 [2024-12-14 06:46:56.694699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.854 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.854 [2024-12-14 06:46:56.710942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.854 [2024-12-14 06:46:56.711001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:42.854 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.854 [2024-12-14 06:46:56.727822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.854 [2024-12-14 06:46:56.727872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.854 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.854 [2024-12-14 06:46:56.744144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.854 [2024-12-14 06:46:56.744178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.854 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.854 [2024-12-14 06:46:56.760044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.854 [2024-12-14 06:46:56.760077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.854 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.854 [2024-12-14 06:46:56.776867] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.854 [2024-12-14 06:46:56.776902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.854 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.854 [2024-12-14 06:46:56.793606] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.854 [2024-12-14 06:46:56.793640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.854 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.854 [2024-12-14 06:46:56.809660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.854 [2024-12-14 06:46:56.809694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.854 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.854 [2024-12-14 06:46:56.825547] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.854 [2024-12-14 06:46:56.825579] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.854 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:42.854 [2024-12-14 06:46:56.841472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.854 [2024-12-14 06:46:56.841504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.114 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.114 [2024-12-14 06:46:56.858119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.114 [2024-12-14 06:46:56.858153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.114 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.114 [2024-12-14 06:46:56.873915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.114 [2024-12-14 06:46:56.873961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.114 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.114 [2024-12-14 06:46:56.890261] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.114 [2024-12-14 06:46:56.890295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.114 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.114 [2024-12-14 06:46:56.907216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.114 [2024-12-14 06:46:56.907249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.114 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.114 [2024-12-14 06:46:56.924136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.114 [2024-12-14 06:46:56.924170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.114 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.114 [2024-12-14 06:46:56.940535] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.114 [2024-12-14 
06:46:56.940568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.114 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.114 [2024-12-14 06:46:56.956906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.114 [2024-12-14 06:46:56.956966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.114 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.114 [2024-12-14 06:46:56.973394] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.114 [2024-12-14 06:46:56.973424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.114 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.114 [2024-12-14 06:46:56.984728] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.114 [2024-12-14 06:46:56.984758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.114 2024/12/14 06:46:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.114 [2024-12-14 06:46:57.001465] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.114 [2024-12-14 06:46:57.001502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.114 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.114 [2024-12-14 06:46:57.017517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.114 [2024-12-14 06:46:57.017549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.114 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.114 [2024-12-14 06:46:57.034186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.114 [2024-12-14 06:46:57.034219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.114 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.114 [2024-12-14 06:46:57.050583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:43.114 [2024-12-14 06:46:57.050617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.114 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.114 [2024-12-14 06:46:57.066829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.114 [2024-12-14 06:46:57.066880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.114 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.114 [2024-12-14 06:46:57.083609] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.114 [2024-12-14 06:46:57.083640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.114 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.114 [2024-12-14 06:46:57.099999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.114 [2024-12-14 06:46:57.100032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.114 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.376 [2024-12-14 06:46:57.115998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.376 [2024-12-14 06:46:57.116040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.376 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.376 [2024-12-14 06:46:57.132904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.376 [2024-12-14 06:46:57.132935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.376 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.376 [2024-12-14 06:46:57.149541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.376 [2024-12-14 06:46:57.149574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.376 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.376 [2024-12-14 06:46:57.166914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:43.376 [2024-12-14 06:46:57.166973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.376 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.376 [2024-12-14 06:46:57.183381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.376 [2024-12-14 06:46:57.183415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.376 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.376 [2024-12-14 06:46:57.199646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.376 [2024-12-14 06:46:57.199682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.376 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.376 [2024-12-14 06:46:57.215776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.376 [2024-12-14 06:46:57.215809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.376 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.376 [2024-12-14 06:46:57.232238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.376 [2024-12-14 06:46:57.232270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.376 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.376 [2024-12-14 06:46:57.248608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.376 [2024-12-14 06:46:57.248643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.376 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.376 [2024-12-14 06:46:57.264945] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.376 [2024-12-14 06:46:57.264999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.376 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.376 [2024-12-14 06:46:57.280935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:43.376 [2024-12-14 06:46:57.280989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.376 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.376 [2024-12-14 06:46:57.292849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.376 [2024-12-14 06:46:57.292892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.376 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.376 [2024-12-14 06:46:57.308831] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.376 [2024-12-14 06:46:57.308876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.376 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.376 [2024-12-14 06:46:57.325324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.376 [2024-12-14 06:46:57.325356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.376 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.376 [2024-12-14 06:46:57.341671] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.376 [2024-12-14 06:46:57.341703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.376 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.376 [2024-12-14 06:46:57.358035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.376 [2024-12-14 06:46:57.358083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.376 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.672 [2024-12-14 06:46:57.375458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.672 [2024-12-14 06:46:57.375490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.672 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.672 [2024-12-14 06:46:57.392269] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.672 [2024-12-14 06:46:57.392316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.672 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.672 [2024-12-14 06:46:57.408166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.672 [2024-12-14 06:46:57.408200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.672 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.672 [2024-12-14 06:46:57.425374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.672 [2024-12-14 06:46:57.425407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.672 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.672 [2024-12-14 06:46:57.442220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.672 [2024-12-14 06:46:57.442254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.672 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.672 [2024-12-14 06:46:57.459179] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.672 [2024-12-14 06:46:57.459211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.672 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.672 [2024-12-14 06:46:57.475843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.672 [2024-12-14 06:46:57.475907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.672 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.672 [2024-12-14 06:46:57.491918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.672 [2024-12-14 06:46:57.491964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.672 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.672 [2024-12-14 
06:46:57.502618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.672 [2024-12-14 06:46:57.502650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.672 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.672 [2024-12-14 06:46:57.519563] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.672 [2024-12-14 06:46:57.519613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.672 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.672 [2024-12-14 06:46:57.530851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.672 [2024-12-14 06:46:57.530898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.672 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.672 [2024-12-14 06:46:57.539720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.672 [2024-12-14 06:46:57.539752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.672 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.672 [2024-12-14 06:46:57.550573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.672 [2024-12-14 06:46:57.550606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.672 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.672 [2024-12-14 06:46:57.559118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.672 [2024-12-14 06:46:57.559150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.672 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.673 [2024-12-14 06:46:57.567811] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.673 [2024-12-14 06:46:57.567852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.673 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
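Note: the repeated failures above all come from the same negative-test loop: nvmf_subsystem_add_ns is re-invoked for nqn.2016-06.io.spdk:cnode1 while NSID 1 is already allocated, so every call is rejected with Code=-32602 (Invalid parameters). Below is a minimal sketch of the request/response shapes behind these log entries, assuming the standard JSON-RPC 2.0 framing SPDK's RPC server uses; the transport (e.g. the default /var/tmp/spdk.sock Unix socket) is omitted and the request id is illustrative, not taken from the log.

    import json

    # Request mirroring the params printed above by the Go JSON-RPC client:
    # map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1]
    request = {
        "jsonrpc": "2.0",
        "id": 1,  # illustrative id; not from the log
        "method": "nvmf_subsystem_add_ns",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": {"bdev_name": "malloc0", "nsid": 1},
        },
    }

    # Error reply matching "Code=-32602 Msg=Invalid parameters" once NSID 1
    # is already in use on the subsystem (a sketch, not captured from the wire).
    expected_error = {
        "jsonrpc": "2.0",
        "id": 1,
        "error": {"code": -32602, "message": "Invalid parameters"},
    }

    print(json.dumps(request, indent=2))
    print(json.dumps(expected_error, indent=2))

The same rejection should be reproducible with SPDK's scripts/rpc.py nvmf_subsystem_add_ns against a subsystem that already has NSID 1 populated; the exact test driver issuing these calls is not shown in this excerpt.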
00:16:43.673 [2024-12-14 06:46:57.576912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.673 [2024-12-14 06:46:57.576971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.673 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.673 [2024-12-14 06:46:57.586100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.673 [2024-12-14 06:46:57.586132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.673 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.673 [2024-12-14 06:46:57.594780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.673 [2024-12-14 06:46:57.594812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.673 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.673 [2024-12-14 06:46:57.603694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.673 [2024-12-14 06:46:57.603728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.673 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.673 [2024-12-14 06:46:57.612565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.673 [2024-12-14 06:46:57.612598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.673 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.673 [2024-12-14 06:46:57.621608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.673 [2024-12-14 06:46:57.621638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.673 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.673 [2024-12-14 06:46:57.636139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.673 [2024-12-14 06:46:57.636172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.673 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:43.673 [2024-12-14 06:46:57.647526] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.673 [2024-12-14 06:46:57.647575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.673 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.932 [2024-12-14 06:46:57.664068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.932 [2024-12-14 06:46:57.664101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.932 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.932 [2024-12-14 06:46:57.679441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.932 [2024-12-14 06:46:57.679473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.932 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.932 [2024-12-14 06:46:57.693577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.932 [2024-12-14 06:46:57.693608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.932 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.932 [2024-12-14 06:46:57.708584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.932 [2024-12-14 06:46:57.708617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.932 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.932 [2024-12-14 06:46:57.723988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.932 [2024-12-14 06:46:57.724020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.932 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.932 [2024-12-14 06:46:57.739053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.932 [2024-12-14 06:46:57.739086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.932 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:43.932 [2024-12-14 06:46:57.750622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.932 [2024-12-14 06:46:57.750655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.933 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.933 [2024-12-14 06:46:57.766379] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.933 [2024-12-14 06:46:57.766412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.933 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.933 [2024-12-14 06:46:57.782216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.933 [2024-12-14 06:46:57.782249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.933 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.933 [2024-12-14 06:46:57.798575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.933 [2024-12-14 06:46:57.798607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.933 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.933 [2024-12-14 06:46:57.808711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.933 [2024-12-14 06:46:57.808743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.933 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.933 [2024-12-14 06:46:57.824784] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.933 [2024-12-14 06:46:57.824818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.933 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.933 [2024-12-14 06:46:57.839896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.933 [2024-12-14 06:46:57.839929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.933 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.933 [2024-12-14 06:46:57.854838] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.933 [2024-12-14 06:46:57.854881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.933 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.933 [2024-12-14 06:46:57.870705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.933 [2024-12-14 06:46:57.870738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.933 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.933 [2024-12-14 06:46:57.882176] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.933 [2024-12-14 06:46:57.882217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.933 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.933 [2024-12-14 06:46:57.897926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.933 [2024-12-14 06:46:57.897969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.933 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:43.933 [2024-12-14 06:46:57.913251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.933 [2024-12-14 06:46:57.913301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.933 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.192 [2024-12-14 06:46:57.925147] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.192 [2024-12-14 06:46:57.925177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.192 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.192 [2024-12-14 06:46:57.940184] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.192 [2024-12-14 06:46:57.940228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.192 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.192 [2024-12-14 06:46:57.951710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.192 [2024-12-14 06:46:57.951743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.192 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.192 [2024-12-14 06:46:57.959731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.192 [2024-12-14 06:46:57.959763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.192 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.192 [2024-12-14 06:46:57.972489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.192 [2024-12-14 06:46:57.972540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.192 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.192 [2024-12-14 06:46:57.983282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.192 [2024-12-14 06:46:57.983347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.192 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.192 [2024-12-14 06:46:57.993533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.192 [2024-12-14 06:46:57.993563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.192 2024/12/14 06:46:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.192 [2024-12-14 06:46:58.003186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.192 [2024-12-14 06:46:58.003227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.192 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.192 [2024-12-14 06:46:58.013047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.192 [2024-12-14 06:46:58.013078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.192 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.192 [2024-12-14 06:46:58.022445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.193 [2024-12-14 06:46:58.022477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.193 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.193 [2024-12-14 06:46:58.037122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.193 [2024-12-14 06:46:58.037155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.193 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.193 [2024-12-14 06:46:58.053933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.193 [2024-12-14 06:46:58.053977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.193 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.193 [2024-12-14 06:46:58.070281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.193 [2024-12-14 06:46:58.070314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.193 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.193 [2024-12-14 06:46:58.081682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.193 [2024-12-14 06:46:58.081713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.193 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.193 [2024-12-14 06:46:58.089718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.193 [2024-12-14 06:46:58.089768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.193 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.193 [2024-12-14 06:46:58.100293] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.193 [2024-12-14 06:46:58.100326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.193 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.193 [2024-12-14 06:46:58.108692] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.193 [2024-12-14 06:46:58.108725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.193 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.193 [2024-12-14 06:46:58.117568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.193 [2024-12-14 06:46:58.117600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.193 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.193 [2024-12-14 06:46:58.126432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.193 [2024-12-14 06:46:58.126464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.193 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.193 [2024-12-14 06:46:58.135383] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.193 [2024-12-14 06:46:58.135416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.193 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.193 [2024-12-14 06:46:58.144020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.193 [2024-12-14 06:46:58.144053] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.193 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.193 [2024-12-14 06:46:58.152787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.193 [2024-12-14 06:46:58.152821] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.193 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.193 [2024-12-14 06:46:58.162044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.193 [2024-12-14 06:46:58.162077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.193 2024/12/14 06:46:58 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.193 [2024-12-14 06:46:58.170822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.193 [2024-12-14 06:46:58.170866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.193 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.193 [2024-12-14 06:46:58.179616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.193 [2024-12-14 06:46:58.179648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.193 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.453 [2024-12-14 06:46:58.188442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.453 [2024-12-14 06:46:58.188474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.453 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.453 [2024-12-14 06:46:58.196990] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.453 [2024-12-14 06:46:58.197016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.453 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.453 [2024-12-14 06:46:58.205898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.453 [2024-12-14 06:46:58.205931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.453 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.453 [2024-12-14 06:46:58.219828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.453 [2024-12-14 06:46:58.219870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.453 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.453 [2024-12-14 06:46:58.234097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.453 [2024-12-14 06:46:58.234130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.453 2024/12/14 06:46:58 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.453 [2024-12-14 06:46:58.248967] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.453 [2024-12-14 06:46:58.249000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.453 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.453 [2024-12-14 06:46:58.265451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.453 [2024-12-14 06:46:58.265482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.453 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.453 [2024-12-14 06:46:58.276917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.453 [2024-12-14 06:46:58.276955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.453 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.453 [2024-12-14 06:46:58.292144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.453 [2024-12-14 06:46:58.292177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.453 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.453 [2024-12-14 06:46:58.308285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.453 [2024-12-14 06:46:58.308316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.453 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.453 [2024-12-14 06:46:58.319624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.453 [2024-12-14 06:46:58.319658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.453 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.453 [2024-12-14 06:46:58.335432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.453 [2024-12-14 06:46:58.335466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.453 2024/12/14 
06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.453 [2024-12-14 06:46:58.351451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.453 [2024-12-14 06:46:58.351485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.453 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.453 [2024-12-14 06:46:58.362234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.453 [2024-12-14 06:46:58.362267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.453 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.453 [2024-12-14 06:46:58.370735] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.453 [2024-12-14 06:46:58.370768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.453 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.453 [2024-12-14 06:46:58.379341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.453 [2024-12-14 06:46:58.379374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.453 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.453 [2024-12-14 06:46:58.391970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.453 [2024-12-14 06:46:58.392002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.453 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.453 [2024-12-14 06:46:58.408426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.453 [2024-12-14 06:46:58.408460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.453 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.453 [2024-12-14 06:46:58.419264] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.453 [2024-12-14 06:46:58.419295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:44.453 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.453 [2024-12-14 06:46:58.434191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.453 [2024-12-14 06:46:58.434224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.453 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.712 [2024-12-14 06:46:58.444340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.712 [2024-12-14 06:46:58.444374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.712 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.712 [2024-12-14 06:46:58.452038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.712 [2024-12-14 06:46:58.452069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.712 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.712 [2024-12-14 06:46:58.463138] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.712 [2024-12-14 06:46:58.463182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.712 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.713 [2024-12-14 06:46:58.471583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.713 [2024-12-14 06:46:58.471616] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.713 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.713 [2024-12-14 06:46:58.482428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.713 [2024-12-14 06:46:58.482463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.713 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.713 [2024-12-14 06:46:58.491058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.713 [2024-12-14 06:46:58.491091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:44.713 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.713 [2024-12-14 06:46:58.499875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.713 [2024-12-14 06:46:58.499909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.713 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.713 [2024-12-14 06:46:58.508500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.713 [2024-12-14 06:46:58.508533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.713 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.713 [2024-12-14 06:46:58.517226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.713 [2024-12-14 06:46:58.517273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.713 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.713 [2024-12-14 06:46:58.525874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.713 [2024-12-14 06:46:58.525907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.713 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.713 [2024-12-14 06:46:58.535157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.713 [2024-12-14 06:46:58.535190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.713 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.713 [2024-12-14 06:46:58.545352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.713 [2024-12-14 06:46:58.545415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.713 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.713 [2024-12-14 06:46:58.561667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.713 [2024-12-14 06:46:58.561700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:44.713 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.713 [2024-12-14 06:46:58.578409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.713 [2024-12-14 06:46:58.578443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.713 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.713 [2024-12-14 06:46:58.588989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.713 [2024-12-14 06:46:58.589024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.713 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.713 [2024-12-14 06:46:58.597623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.713 [2024-12-14 06:46:58.597655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.713 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.713 [2024-12-14 06:46:58.611938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.713 [2024-12-14 06:46:58.611982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.713 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.713 [2024-12-14 06:46:58.627440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.713 [2024-12-14 06:46:58.627475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.713 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.713 [2024-12-14 06:46:58.639590] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.713 [2024-12-14 06:46:58.639623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.713 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.713 [2024-12-14 06:46:58.655145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.713 [2024-12-14 06:46:58.655191] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.713 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.713 [2024-12-14 06:46:58.671938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.713 [2024-12-14 06:46:58.671979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.713 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.713 [2024-12-14 06:46:58.688891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.713 [2024-12-14 06:46:58.688926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.713 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.713 [2024-12-14 06:46:58.699583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.713 [2024-12-14 06:46:58.699618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.713 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.972 [2024-12-14 06:46:58.715597] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.972 [2024-12-14 06:46:58.715631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.972 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.972 [2024-12-14 06:46:58.731286] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.972 [2024-12-14 06:46:58.731320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.972 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.972 [2024-12-14 06:46:58.747703] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.972 [2024-12-14 06:46:58.747734] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.972 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.972 [2024-12-14 06:46:58.758692] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.972 [2024-12-14 
06:46:58.758724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.972 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.972 [2024-12-14 06:46:58.774980] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.972 [2024-12-14 06:46:58.775013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.973 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.973 [2024-12-14 06:46:58.790423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.973 [2024-12-14 06:46:58.790456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.973 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.973 [2024-12-14 06:46:58.798885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.973 [2024-12-14 06:46:58.798919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.973 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.973 [2024-12-14 06:46:58.807616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.973 [2024-12-14 06:46:58.807649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.973 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.973 [2024-12-14 06:46:58.816672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.973 [2024-12-14 06:46:58.816714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.973 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.973 [2024-12-14 06:46:58.825572] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.973 [2024-12-14 06:46:58.825604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.973 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.973 [2024-12-14 06:46:58.834721] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:44.973 [2024-12-14 06:46:58.834754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.973 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.973 [2024-12-14 06:46:58.843275] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.973 [2024-12-14 06:46:58.843308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.973 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.973 [2024-12-14 06:46:58.851864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.973 [2024-12-14 06:46:58.851896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.973 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.973 [2024-12-14 06:46:58.860546] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.973 [2024-12-14 06:46:58.860591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.973 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.973 [2024-12-14 06:46:58.869531] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.973 [2024-12-14 06:46:58.869562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.973 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.973 [2024-12-14 06:46:58.878422] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.973 [2024-12-14 06:46:58.878455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.973 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.973 [2024-12-14 06:46:58.887344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.973 [2024-12-14 06:46:58.887378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.973 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.973 [2024-12-14 06:46:58.896551] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:44.973 [2024-12-14 06:46:58.896582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.973 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.973 [2024-12-14 06:46:58.905114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.973 [2024-12-14 06:46:58.905146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.973 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.973 [2024-12-14 06:46:58.913929] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.973 [2024-12-14 06:46:58.913969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.973 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.973 [2024-12-14 06:46:58.923046] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.973 [2024-12-14 06:46:58.923079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.973 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.973 [2024-12-14 06:46:58.932099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.973 [2024-12-14 06:46:58.932131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.973 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.973 [2024-12-14 06:46:58.941234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.973 [2024-12-14 06:46:58.941267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.973 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.973 [2024-12-14 06:46:58.949849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.973 [2024-12-14 06:46:58.949882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.973 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:44.973 [2024-12-14 06:46:58.958860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:44.973 [2024-12-14 06:46:58.958896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.973 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.232 [2024-12-14 06:46:58.968127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.232 [2024-12-14 06:46:58.968160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.232 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.232 [2024-12-14 06:46:58.977179] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.232 [2024-12-14 06:46:58.977223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.232 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.232 [2024-12-14 06:46:58.985982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.232 [2024-12-14 06:46:58.986015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.232 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.232 [2024-12-14 06:46:58.995338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.232 [2024-12-14 06:46:58.995372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.232 2024/12/14 06:46:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.232 [2024-12-14 06:46:59.008918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.232 [2024-12-14 06:46:59.008986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.232 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.232 [2024-12-14 06:46:59.024760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.232 [2024-12-14 06:46:59.024791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.232 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.232 [2024-12-14 06:46:59.041617] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.232 [2024-12-14 06:46:59.041651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.232 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.232 [2024-12-14 06:46:59.058221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.232 [2024-12-14 06:46:59.058254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.232 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.232 [2024-12-14 06:46:59.075057] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.232 [2024-12-14 06:46:59.075088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.232 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.232 [2024-12-14 06:46:59.086651] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.232 [2024-12-14 06:46:59.086859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.232 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.232 [2024-12-14 06:46:59.097624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.232 [2024-12-14 06:46:59.097674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.233 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.233 [2024-12-14 06:46:59.113648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.233 [2024-12-14 06:46:59.113697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.233 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.233 [2024-12-14 06:46:59.124550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.233 [2024-12-14 06:46:59.124602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.233 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.233 [2024-12-14 
06:46:59.133075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.233 [2024-12-14 06:46:59.133123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.233 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.233 [2024-12-14 06:46:59.143558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.233 [2024-12-14 06:46:59.143611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.233 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.233 [2024-12-14 06:46:59.152281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.233 [2024-12-14 06:46:59.152332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.233 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.233 [2024-12-14 06:46:59.163280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.233 [2024-12-14 06:46:59.163332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.233 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.233 [2024-12-14 06:46:59.172161] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.233 [2024-12-14 06:46:59.172216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.233 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.233 [2024-12-14 06:46:59.181126] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.233 [2024-12-14 06:46:59.181175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.233 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.233 [2024-12-14 06:46:59.190346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.233 [2024-12-14 06:46:59.190397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.233 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
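Every failure in this stretch of the log is the same JSON-RPC request being rejected: the test keeps asking subsystem nqn.2016-06.io.spdk:cnode1 to attach bdev malloc0 as namespace ID 1 while NSID 1 is already attached, so spdk_nvmf_subsystem_add_ns_ext refuses it and the RPC layer replies with the standard JSON-RPC "Invalid params" error (code -32602), which is what the Go client prints as Code=-32602 Msg=Invalid parameters. The following is a minimal, hand-written sketch (not part of the test scripts) of how such a request could be issued directly; it assumes the target's RPC server listens on the default Unix socket /var/tmp/spdk.sock and that a bdev named malloc0 exists, neither of which is shown explicitly in this log.

#!/usr/bin/env python3
# Hedged sketch: send the nvmf_subsystem_add_ns JSON-RPC call seen in the log.
# Assumptions: SPDK RPC server on /var/tmp/spdk.sock (the usual default) and an
# existing bdev "malloc0". If NSID 1 is already attached to the subsystem, the
# reply carries error code -32602 ("Invalid parameters"), matching the log.
import json
import socket

def spdk_rpc(method, params, sock_path="/var/tmp/spdk.sock"):
    request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        # Simplified read: assume the whole JSON reply arrives in one chunk.
        return json.loads(sock.recv(65536).decode())

reply = spdk_rpc(
    "nvmf_subsystem_add_ns",
    {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
)
print(reply)  # repeating the call with the same nsid -> {"error": {"code": -32602, ...}}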
00:16:45.233 [2024-12-14 06:46:59.199253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.233 [2024-12-14 06:46:59.199305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.233 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.233 [2024-12-14 06:46:59.208431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.233 [2024-12-14 06:46:59.208477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.233 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.233 [2024-12-14 06:46:59.217028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.233 [2024-12-14 06:46:59.217077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.233 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.492 [2024-12-14 06:46:59.226062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.492 [2024-12-14 06:46:59.226125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.492 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.492 [2024-12-14 06:46:59.234881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.492 [2024-12-14 06:46:59.234932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.492 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.492 [2024-12-14 06:46:59.243650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.492 [2024-12-14 06:46:59.243702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.492 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.492 [2024-12-14 06:46:59.252463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.492 [2024-12-14 06:46:59.252515] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.492 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:45.492 [2024-12-14 06:46:59.261371] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.492 [2024-12-14 06:46:59.261418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.492 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.492 [2024-12-14 06:46:59.270375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.492 [2024-12-14 06:46:59.270424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.492 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.493 [2024-12-14 06:46:59.279474] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.493 [2024-12-14 06:46:59.279526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.493 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.493 [2024-12-14 06:46:59.288339] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.493 [2024-12-14 06:46:59.288391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.493 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.493 [2024-12-14 06:46:59.297435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.493 [2024-12-14 06:46:59.297484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.493 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.493 [2024-12-14 06:46:59.306484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.493 [2024-12-14 06:46:59.306532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.493 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.493 [2024-12-14 06:46:59.315245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.493 [2024-12-14 06:46:59.315295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.493 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:45.493 [2024-12-14 06:46:59.323870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.493 [2024-12-14 06:46:59.323921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.493 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.493 [2024-12-14 06:46:59.332890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.493 [2024-12-14 06:46:59.332965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.493 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.493 [2024-12-14 06:46:59.342195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.493 [2024-12-14 06:46:59.342247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.493 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.493 [2024-12-14 06:46:59.351261] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.493 [2024-12-14 06:46:59.351313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.493 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.493 [2024-12-14 06:46:59.360043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.493 [2024-12-14 06:46:59.360094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.493 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.493 [2024-12-14 06:46:59.369016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.493 [2024-12-14 06:46:59.369067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.493 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.493 [2024-12-14 06:46:59.378050] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.493 [2024-12-14 06:46:59.378117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.493 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.493 [2024-12-14 06:46:59.387031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.493 [2024-12-14 06:46:59.387082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.493 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.493 [2024-12-14 06:46:59.395700] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.493 [2024-12-14 06:46:59.395752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.493 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.493 [2024-12-14 06:46:59.404736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.493 [2024-12-14 06:46:59.404788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.493 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.493 [2024-12-14 06:46:59.413654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.493 [2024-12-14 06:46:59.413687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.493 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.493 [2024-12-14 06:46:59.427482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.493 [2024-12-14 06:46:59.427534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.493 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.493 [2024-12-14 06:46:59.442529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.493 [2024-12-14 06:46:59.442580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.493 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.493 [2024-12-14 06:46:59.453544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.493 [2024-12-14 06:46:59.453580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.493 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.493 [2024-12-14 06:46:59.461182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.493 [2024-12-14 06:46:59.461237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.493 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.493 [2024-12-14 06:46:59.476151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.493 [2024-12-14 06:46:59.476212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.493 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.753 [2024-12-14 06:46:59.491782] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.753 [2024-12-14 06:46:59.491835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.753 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.753 [2024-12-14 06:46:59.502521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.753 [2024-12-14 06:46:59.502573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.753 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.753 [2024-12-14 06:46:59.510950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.753 [2024-12-14 06:46:59.511018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.753 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.753 [2024-12-14 06:46:59.521793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.753 [2024-12-14 06:46:59.521868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.753 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.753 [2024-12-14 06:46:59.533041] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.753 [2024-12-14 06:46:59.533092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.753 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.753 [2024-12-14 06:46:59.540501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.753 [2024-12-14 06:46:59.540548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.753 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.753 [2024-12-14 06:46:59.551891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.753 [2024-12-14 06:46:59.551966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.753 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.753 [2024-12-14 06:46:59.560256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.753 [2024-12-14 06:46:59.560304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.753 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.753 [2024-12-14 06:46:59.569540] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.753 [2024-12-14 06:46:59.569589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.753 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.753 [2024-12-14 06:46:59.578501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.753 [2024-12-14 06:46:59.578553] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.753 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.753 [2024-12-14 06:46:59.587456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.753 [2024-12-14 06:46:59.587509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.753 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.753 [2024-12-14 06:46:59.596242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.753 [2024-12-14 06:46:59.596293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.753 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.753 [2024-12-14 06:46:59.605426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.753 [2024-12-14 06:46:59.605474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.753 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.753 [2024-12-14 06:46:59.614312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.753 [2024-12-14 06:46:59.614362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.753 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.753 [2024-12-14 06:46:59.622968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.753 [2024-12-14 06:46:59.623038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.753 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.753 [2024-12-14 06:46:59.632057] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.753 [2024-12-14 06:46:59.632108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.753 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.753 [2024-12-14 06:46:59.641301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.753 [2024-12-14 06:46:59.641352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.753 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.753 [2024-12-14 06:46:59.650484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.753 [2024-12-14 06:46:59.650535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.753 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.753 [2024-12-14 06:46:59.664768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.753 [2024-12-14 06:46:59.664818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.753 2024/12/14 06:46:59 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.753 [2024-12-14 06:46:59.676323] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.753 [2024-12-14 06:46:59.676375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.754 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.754 [2024-12-14 06:46:59.691438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.754 [2024-12-14 06:46:59.691490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.754 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.754 [2024-12-14 06:46:59.702827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.754 [2024-12-14 06:46:59.702885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.754 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.754 [2024-12-14 06:46:59.718757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.754 [2024-12-14 06:46:59.718809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.754 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.754 [2024-12-14 06:46:59.728652] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.754 [2024-12-14 06:46:59.728704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.754 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:45.754 [2024-12-14 06:46:59.738677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.754 [2024-12-14 06:46:59.738729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.754 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.013 [2024-12-14 06:46:59.748955] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.013 [2024-12-14 06:46:59.749005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.013 2024/12/14 06:46:59 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.013 [2024-12-14 06:46:59.760937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.013 [2024-12-14 06:46:59.760999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.013 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.013 [2024-12-14 06:46:59.776453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.013 [2024-12-14 06:46:59.776504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.013 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.013 [2024-12-14 06:46:59.793018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.013 [2024-12-14 06:46:59.793067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.013 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.013 [2024-12-14 06:46:59.804980] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.013 [2024-12-14 06:46:59.805031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.013 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.013 [2024-12-14 06:46:59.820937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.013 [2024-12-14 06:46:59.821008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.013 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.013 [2024-12-14 06:46:59.832115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.013 [2024-12-14 06:46:59.832163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.013 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.013 [2024-12-14 06:46:59.847825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.013 [2024-12-14 06:46:59.847883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.013 2024/12/14 
06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.013 [2024-12-14 06:46:59.858740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.013 [2024-12-14 06:46:59.858787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.013 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.013 [2024-12-14 06:46:59.867344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.013 [2024-12-14 06:46:59.867395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.013 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.013 [2024-12-14 06:46:59.880649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.013 [2024-12-14 06:46:59.880699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.013 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.013 [2024-12-14 06:46:59.888924] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.013 [2024-12-14 06:46:59.888981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.013 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.013 [2024-12-14 06:46:59.900070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.013 [2024-12-14 06:46:59.900121] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.013 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.013 [2024-12-14 06:46:59.908662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.013 [2024-12-14 06:46:59.908708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.014 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.014 [2024-12-14 06:46:59.922440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.014 [2024-12-14 06:46:59.922492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:46.014 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.014 [2024-12-14 06:46:59.938536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.014 [2024-12-14 06:46:59.938588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.014 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.014 [2024-12-14 06:46:59.954446] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.014 [2024-12-14 06:46:59.954498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.014 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.014 [2024-12-14 06:46:59.966499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.014 [2024-12-14 06:46:59.966550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.014 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.014 [2024-12-14 06:46:59.981415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.014 [2024-12-14 06:46:59.981464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.014 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.014 [2024-12-14 06:46:59.997423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.014 [2024-12-14 06:46:59.997473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.014 2024/12/14 06:46:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.273 [2024-12-14 06:47:00.009167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.273 [2024-12-14 06:47:00.009218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.273 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.273 [2024-12-14 06:47:00.027925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.273 [2024-12-14 06:47:00.028174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:46.273 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.273 [2024-12-14 06:47:00.038968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.273 [2024-12-14 06:47:00.039107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.273 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.273 [2024-12-14 06:47:00.052287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.273 [2024-12-14 06:47:00.052437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.273 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.273 [2024-12-14 06:47:00.067819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.273 [2024-12-14 06:47:00.067950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.273 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.273 [2024-12-14 06:47:00.083456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.273 [2024-12-14 06:47:00.083578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.273 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.274 [2024-12-14 06:47:00.099824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.274 [2024-12-14 06:47:00.099880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.274 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.274 [2024-12-14 06:47:00.116257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.274 [2024-12-14 06:47:00.116310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.274 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.274 [2024-12-14 06:47:00.132835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.274 [2024-12-14 06:47:00.132871] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:46.274 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.274 [2024-12-14 06:47:00.149442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.274 [2024-12-14 06:47:00.149492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.274 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.274 [2024-12-14 06:47:00.166034] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.274 [2024-12-14 06:47:00.166102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.274 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.274 [2024-12-14 06:47:00.182233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.274 [2024-12-14 06:47:00.182283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.274 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.274 [2024-12-14 06:47:00.197967] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.274 [2024-12-14 06:47:00.198019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.274 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.274 [2024-12-14 06:47:00.210518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.274 [2024-12-14 06:47:00.210571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.274 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.274 [2024-12-14 06:47:00.222462] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.274 [2024-12-14 06:47:00.222513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.274 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.274 [2024-12-14 06:47:00.238572] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.274 [2024-12-14 06:47:00.238625] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.274 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.274 [2024-12-14 06:47:00.254022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.274 [2024-12-14 06:47:00.254073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.274 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.534 [2024-12-14 06:47:00.265086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.534 [2024-12-14 06:47:00.265120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.534 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.534 [2024-12-14 06:47:00.280504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.534 [2024-12-14 06:47:00.280556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.534 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.534 [2024-12-14 06:47:00.296274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.534 [2024-12-14 06:47:00.296326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.534 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.534 [2024-12-14 06:47:00.310853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.534 [2024-12-14 06:47:00.310905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.534 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.534 [2024-12-14 06:47:00.322030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.534 [2024-12-14 06:47:00.322083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.534 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.534 [2024-12-14 06:47:00.337448] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.534 [2024-12-14 
06:47:00.337501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.534 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.534 [2024-12-14 06:47:00.353677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.534 [2024-12-14 06:47:00.353729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.534 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.534 [2024-12-14 06:47:00.369587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.534 [2024-12-14 06:47:00.369638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.534 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.534 [2024-12-14 06:47:00.380413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.534 [2024-12-14 06:47:00.380449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.534 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.534 00:16:46.534 Latency(us) 00:16:46.534 [2024-12-14T06:47:00.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.534 [2024-12-14T06:47:00.526Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:46.534 Nvme1n1 : 5.01 14124.63 110.35 0.00 0.00 9051.78 3932.16 20733.21 00:16:46.534 [2024-12-14T06:47:00.526Z] =================================================================================================================== 00:16:46.534 [2024-12-14T06:47:00.526Z] Total : 14124.63 110.35 0.00 0.00 9051.78 3932.16 20733.21 00:16:46.534 [2024-12-14 06:47:00.390072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.534 [2024-12-14 06:47:00.390122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.534 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.534 [2024-12-14 06:47:00.402083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.534 [2024-12-14 06:47:00.402132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.534 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.534 [2024-12-14 
06:47:00.414094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.534 [2024-12-14 06:47:00.414138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.534 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.534 [2024-12-14 06:47:00.426086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.534 [2024-12-14 06:47:00.426140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.534 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.534 [2024-12-14 06:47:00.438100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.534 [2024-12-14 06:47:00.438149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.534 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.534 [2024-12-14 06:47:00.450104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.534 [2024-12-14 06:47:00.450153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.534 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.534 [2024-12-14 06:47:00.462106] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.534 [2024-12-14 06:47:00.462152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.534 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.534 [2024-12-14 06:47:00.474112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.534 [2024-12-14 06:47:00.474160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.534 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.534 [2024-12-14 06:47:00.486116] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.534 [2024-12-14 06:47:00.486166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.534 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:16:46.534 [2024-12-14 06:47:00.498103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.534 [2024-12-14 06:47:00.498183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.534 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.534 [2024-12-14 06:47:00.510121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.534 [2024-12-14 06:47:00.510170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.534 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.534 [2024-12-14 06:47:00.522150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.534 [2024-12-14 06:47:00.522211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.793 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.793 [2024-12-14 06:47:00.534125] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.793 [2024-12-14 06:47:00.534171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.793 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.793 [2024-12-14 06:47:00.546113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.793 [2024-12-14 06:47:00.546163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.793 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.793 [2024-12-14 06:47:00.558126] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.793 [2024-12-14 06:47:00.558172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.793 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.793 [2024-12-14 06:47:00.570120] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.793 [2024-12-14 06:47:00.570162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.793 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:46.793 [2024-12-14 06:47:00.582131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.793 [2024-12-14 06:47:00.582175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.793 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.793 [2024-12-14 06:47:00.594123] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.793 [2024-12-14 06:47:00.594164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.793 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.793 [2024-12-14 06:47:00.606171] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.793 [2024-12-14 06:47:00.606220] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.793 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.793 [2024-12-14 06:47:00.618179] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.793 [2024-12-14 06:47:00.618232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.793 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.793 [2024-12-14 06:47:00.630172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.793 [2024-12-14 06:47:00.630216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.793 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.794 [2024-12-14 06:47:00.642173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.794 [2024-12-14 06:47:00.642221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.794 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.794 [2024-12-14 06:47:00.654191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.794 [2024-12-14 06:47:00.654232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.794 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:46.794 [2024-12-14 06:47:00.666175] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.794 [2024-12-14 06:47:00.666225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.794 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.794 [2024-12-14 06:47:00.678213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.794 [2024-12-14 06:47:00.678271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.794 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.794 [2024-12-14 06:47:00.690185] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.794 [2024-12-14 06:47:00.690228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.794 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.794 [2024-12-14 06:47:00.702213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.794 [2024-12-14 06:47:00.702276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.794 2024/12/14 06:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:46.794 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (75786) - No such process 00:16:46.794 06:47:00 -- target/zcopy.sh@49 -- # wait 75786 00:16:46.794 06:47:00 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:46.794 06:47:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.794 06:47:00 -- common/autotest_common.sh@10 -- # set +x 00:16:46.794 06:47:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.794 06:47:00 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:46.794 06:47:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.794 06:47:00 -- common/autotest_common.sh@10 -- # set +x 00:16:46.794 delay0 00:16:46.794 06:47:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.794 06:47:00 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:46.794 06:47:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.794 06:47:00 -- common/autotest_common.sh@10 -- # set +x 00:16:46.794 06:47:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.794 06:47:00 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:47.052 [2024-12-14 06:47:00.887755] nvme_fabric.c: 
295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:53.618 Initializing NVMe Controllers 00:16:53.618 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:53.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:53.618 Initialization complete. Launching workers. 00:16:53.618 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 69 00:16:53.618 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 356, failed to submit 33 00:16:53.618 success 164, unsuccess 192, failed 0 00:16:53.618 06:47:06 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:53.618 06:47:06 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:53.618 06:47:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:53.618 06:47:06 -- nvmf/common.sh@116 -- # sync 00:16:53.618 06:47:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:53.618 06:47:07 -- nvmf/common.sh@119 -- # set +e 00:16:53.618 06:47:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:53.618 06:47:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:53.618 rmmod nvme_tcp 00:16:53.618 rmmod nvme_fabrics 00:16:53.618 rmmod nvme_keyring 00:16:53.618 06:47:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:53.618 06:47:07 -- nvmf/common.sh@123 -- # set -e 00:16:53.618 06:47:07 -- nvmf/common.sh@124 -- # return 0 00:16:53.618 06:47:07 -- nvmf/common.sh@477 -- # '[' -n 75619 ']' 00:16:53.618 06:47:07 -- nvmf/common.sh@478 -- # killprocess 75619 00:16:53.618 06:47:07 -- common/autotest_common.sh@936 -- # '[' -z 75619 ']' 00:16:53.618 06:47:07 -- common/autotest_common.sh@940 -- # kill -0 75619 00:16:53.618 06:47:07 -- common/autotest_common.sh@941 -- # uname 00:16:53.618 06:47:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:53.618 06:47:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75619 00:16:53.618 06:47:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:53.618 killing process with pid 75619 00:16:53.618 06:47:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:53.618 06:47:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75619' 00:16:53.618 06:47:07 -- common/autotest_common.sh@955 -- # kill 75619 00:16:53.618 06:47:07 -- common/autotest_common.sh@960 -- # wait 75619 00:16:53.618 06:47:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:53.618 06:47:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:53.618 06:47:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:53.618 06:47:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:53.618 06:47:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:53.618 06:47:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.618 06:47:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.618 06:47:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.619 06:47:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:53.619 00:16:53.619 real 0m24.978s 00:16:53.619 user 0m40.546s 00:16:53.619 sys 0m6.504s 00:16:53.619 06:47:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:53.619 06:47:07 -- common/autotest_common.sh@10 -- # set +x 00:16:53.619 ************************************ 00:16:53.619 END TEST nvmf_zcopy 00:16:53.619 ************************************ 00:16:53.619 
06:47:07 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:53.619 06:47:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:53.619 06:47:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:53.619 06:47:07 -- common/autotest_common.sh@10 -- # set +x 00:16:53.619 ************************************ 00:16:53.619 START TEST nvmf_nmic 00:16:53.619 ************************************ 00:16:53.619 06:47:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:53.877 * Looking for test storage... 00:16:53.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:53.877 06:47:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:53.877 06:47:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:53.877 06:47:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:53.877 06:47:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:53.877 06:47:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:53.877 06:47:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:53.877 06:47:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:53.877 06:47:07 -- scripts/common.sh@335 -- # IFS=.-: 00:16:53.877 06:47:07 -- scripts/common.sh@335 -- # read -ra ver1 00:16:53.877 06:47:07 -- scripts/common.sh@336 -- # IFS=.-: 00:16:53.877 06:47:07 -- scripts/common.sh@336 -- # read -ra ver2 00:16:53.877 06:47:07 -- scripts/common.sh@337 -- # local 'op=<' 00:16:53.877 06:47:07 -- scripts/common.sh@339 -- # ver1_l=2 00:16:53.877 06:47:07 -- scripts/common.sh@340 -- # ver2_l=1 00:16:53.877 06:47:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:53.877 06:47:07 -- scripts/common.sh@343 -- # case "$op" in 00:16:53.877 06:47:07 -- scripts/common.sh@344 -- # : 1 00:16:53.877 06:47:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:53.877 06:47:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:53.877 06:47:07 -- scripts/common.sh@364 -- # decimal 1 00:16:53.877 06:47:07 -- scripts/common.sh@352 -- # local d=1 00:16:53.877 06:47:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:53.877 06:47:07 -- scripts/common.sh@354 -- # echo 1 00:16:53.877 06:47:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:53.877 06:47:07 -- scripts/common.sh@365 -- # decimal 2 00:16:53.877 06:47:07 -- scripts/common.sh@352 -- # local d=2 00:16:53.877 06:47:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:53.877 06:47:07 -- scripts/common.sh@354 -- # echo 2 00:16:53.877 06:47:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:53.877 06:47:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:53.877 06:47:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:53.877 06:47:07 -- scripts/common.sh@367 -- # return 0 00:16:53.877 06:47:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:53.877 06:47:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:53.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.877 --rc genhtml_branch_coverage=1 00:16:53.877 --rc genhtml_function_coverage=1 00:16:53.877 --rc genhtml_legend=1 00:16:53.877 --rc geninfo_all_blocks=1 00:16:53.877 --rc geninfo_unexecuted_blocks=1 00:16:53.877 00:16:53.877 ' 00:16:53.877 06:47:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:53.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.877 --rc genhtml_branch_coverage=1 00:16:53.877 --rc genhtml_function_coverage=1 00:16:53.877 --rc genhtml_legend=1 00:16:53.877 --rc geninfo_all_blocks=1 00:16:53.877 --rc geninfo_unexecuted_blocks=1 00:16:53.877 00:16:53.877 ' 00:16:53.877 06:47:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:53.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.877 --rc genhtml_branch_coverage=1 00:16:53.878 --rc genhtml_function_coverage=1 00:16:53.878 --rc genhtml_legend=1 00:16:53.878 --rc geninfo_all_blocks=1 00:16:53.878 --rc geninfo_unexecuted_blocks=1 00:16:53.878 00:16:53.878 ' 00:16:53.878 06:47:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:53.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.878 --rc genhtml_branch_coverage=1 00:16:53.878 --rc genhtml_function_coverage=1 00:16:53.878 --rc genhtml_legend=1 00:16:53.878 --rc geninfo_all_blocks=1 00:16:53.878 --rc geninfo_unexecuted_blocks=1 00:16:53.878 00:16:53.878 ' 00:16:53.878 06:47:07 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:53.878 06:47:07 -- nvmf/common.sh@7 -- # uname -s 00:16:53.878 06:47:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.878 06:47:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.878 06:47:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.878 06:47:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.878 06:47:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.878 06:47:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.878 06:47:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.878 06:47:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.878 06:47:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.878 06:47:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.878 06:47:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:16:53.878 
06:47:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:16:53.878 06:47:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.878 06:47:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.878 06:47:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:53.878 06:47:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:53.878 06:47:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.878 06:47:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.878 06:47:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.878 06:47:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.878 06:47:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.878 06:47:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.878 06:47:07 -- paths/export.sh@5 -- # export PATH 00:16:53.878 06:47:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.878 06:47:07 -- nvmf/common.sh@46 -- # : 0 00:16:53.878 06:47:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:53.878 06:47:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:53.878 06:47:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:53.878 06:47:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.878 06:47:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.878 06:47:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:16:53.878 06:47:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:53.878 06:47:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:53.878 06:47:07 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:53.878 06:47:07 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:53.878 06:47:07 -- target/nmic.sh@14 -- # nvmftestinit 00:16:53.878 06:47:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:53.878 06:47:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.878 06:47:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:53.878 06:47:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:53.878 06:47:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:53.878 06:47:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.878 06:47:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.878 06:47:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.878 06:47:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:53.878 06:47:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:53.878 06:47:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:53.878 06:47:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:53.878 06:47:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:53.878 06:47:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:53.878 06:47:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:53.878 06:47:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:53.878 06:47:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:53.878 06:47:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:53.878 06:47:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:53.878 06:47:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:53.878 06:47:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:53.878 06:47:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:53.878 06:47:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:53.878 06:47:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:53.878 06:47:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:53.878 06:47:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:53.878 06:47:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:53.878 06:47:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:53.878 Cannot find device "nvmf_tgt_br" 00:16:53.878 06:47:07 -- nvmf/common.sh@154 -- # true 00:16:53.878 06:47:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:53.878 Cannot find device "nvmf_tgt_br2" 00:16:53.878 06:47:07 -- nvmf/common.sh@155 -- # true 00:16:53.878 06:47:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:53.878 06:47:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:53.878 Cannot find device "nvmf_tgt_br" 00:16:53.878 06:47:07 -- nvmf/common.sh@157 -- # true 00:16:53.878 06:47:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:53.878 Cannot find device "nvmf_tgt_br2" 00:16:53.878 06:47:07 -- nvmf/common.sh@158 -- # true 00:16:53.878 06:47:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:54.137 06:47:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:54.137 06:47:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:54.137 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:16:54.137 06:47:07 -- nvmf/common.sh@161 -- # true 00:16:54.137 06:47:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:54.137 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:54.137 06:47:07 -- nvmf/common.sh@162 -- # true 00:16:54.137 06:47:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:54.137 06:47:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:54.137 06:47:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:54.137 06:47:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:54.137 06:47:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:54.137 06:47:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:54.137 06:47:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:54.137 06:47:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:54.137 06:47:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:54.137 06:47:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:54.137 06:47:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:54.137 06:47:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:54.137 06:47:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:54.137 06:47:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:54.137 06:47:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:54.137 06:47:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:54.137 06:47:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:54.137 06:47:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:54.137 06:47:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:54.137 06:47:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:54.137 06:47:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:54.137 06:47:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:54.137 06:47:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:54.137 06:47:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:54.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:16:54.137 00:16:54.137 --- 10.0.0.2 ping statistics --- 00:16:54.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.137 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:16:54.137 06:47:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:54.137 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:54.137 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:16:54.137 00:16:54.137 --- 10.0.0.3 ping statistics --- 00:16:54.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.137 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:54.137 06:47:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:54.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:54.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:54.137 00:16:54.137 --- 10.0.0.1 ping statistics --- 00:16:54.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.137 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:54.137 06:47:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.137 06:47:08 -- nvmf/common.sh@421 -- # return 0 00:16:54.137 06:47:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:54.137 06:47:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.137 06:47:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:54.137 06:47:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:54.137 06:47:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.137 06:47:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:54.137 06:47:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:54.137 06:47:08 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:54.137 06:47:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:54.137 06:47:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:54.137 06:47:08 -- common/autotest_common.sh@10 -- # set +x 00:16:54.137 06:47:08 -- nvmf/common.sh@469 -- # nvmfpid=76125 00:16:54.137 06:47:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:54.137 06:47:08 -- nvmf/common.sh@470 -- # waitforlisten 76125 00:16:54.137 06:47:08 -- common/autotest_common.sh@829 -- # '[' -z 76125 ']' 00:16:54.138 06:47:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.138 06:47:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:54.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.138 06:47:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.138 06:47:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:54.138 06:47:08 -- common/autotest_common.sh@10 -- # set +x 00:16:54.396 [2024-12-14 06:47:08.179635] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:54.396 [2024-12-14 06:47:08.179729] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.396 [2024-12-14 06:47:08.320938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:54.655 [2024-12-14 06:47:08.447889] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:54.655 [2024-12-14 06:47:08.448102] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.655 [2024-12-14 06:47:08.448122] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.655 [2024-12-14 06:47:08.448134] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:54.655 [2024-12-14 06:47:08.448275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.655 [2024-12-14 06:47:08.448420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.655 [2024-12-14 06:47:08.448991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:54.655 [2024-12-14 06:47:08.449002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.223 06:47:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:55.223 06:47:09 -- common/autotest_common.sh@862 -- # return 0 00:16:55.223 06:47:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:55.223 06:47:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:55.223 06:47:09 -- common/autotest_common.sh@10 -- # set +x 00:16:55.482 06:47:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.482 06:47:09 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:55.482 06:47:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.482 06:47:09 -- common/autotest_common.sh@10 -- # set +x 00:16:55.482 [2024-12-14 06:47:09.249874] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.482 06:47:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.482 06:47:09 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:55.482 06:47:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.482 06:47:09 -- common/autotest_common.sh@10 -- # set +x 00:16:55.482 Malloc0 00:16:55.482 06:47:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.482 06:47:09 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:55.482 06:47:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.482 06:47:09 -- common/autotest_common.sh@10 -- # set +x 00:16:55.482 06:47:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.482 06:47:09 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:55.482 06:47:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.482 06:47:09 -- common/autotest_common.sh@10 -- # set +x 00:16:55.482 06:47:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.482 06:47:09 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.482 06:47:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.482 06:47:09 -- common/autotest_common.sh@10 -- # set +x 00:16:55.482 [2024-12-14 06:47:09.321165] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.482 test case1: single bdev can't be used in multiple subsystems 00:16:55.482 06:47:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.482 06:47:09 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:55.482 06:47:09 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:55.482 06:47:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.482 06:47:09 -- common/autotest_common.sh@10 -- # set +x 00:16:55.482 06:47:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.482 06:47:09 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:55.482 06:47:09 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:55.482 06:47:09 -- common/autotest_common.sh@10 -- # set +x 00:16:55.482 06:47:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.482 06:47:09 -- target/nmic.sh@28 -- # nmic_status=0 00:16:55.482 06:47:09 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:55.482 06:47:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.482 06:47:09 -- common/autotest_common.sh@10 -- # set +x 00:16:55.482 [2024-12-14 06:47:09.344953] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:55.482 [2024-12-14 06:47:09.345024] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:55.482 [2024-12-14 06:47:09.345035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.482 2024/12/14 06:47:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:55.482 request: 00:16:55.482 { 00:16:55.482 "method": "nvmf_subsystem_add_ns", 00:16:55.482 "params": { 00:16:55.482 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:55.482 "namespace": { 00:16:55.482 "bdev_name": "Malloc0" 00:16:55.482 } 00:16:55.482 } 00:16:55.482 } 00:16:55.482 Got JSON-RPC error response 00:16:55.482 GoRPCClient: error on JSON-RPC call 00:16:55.482 06:47:09 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:55.483 06:47:09 -- target/nmic.sh@29 -- # nmic_status=1 00:16:55.483 06:47:09 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:55.483 Adding namespace failed - expected result. 00:16:55.483 06:47:09 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
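The request/response pair above shows why test case1 is expected to fail: Malloc0 is already claimed (exclusive_write) by nqn.2016-06.io.spdk:cnode1, so attaching the same bdev to a second subsystem is rejected with Code=-32602. A minimal sketch of the same check done by hand with scripts/rpc.py follows; names and NQNs mirror this run, the default /var/tmp/spdk.sock socket is assumed, and the non-zero exit status from rpc.py is the expected outcome.

    # Sketch only: Malloc0 is already a namespace of cnode1, so this second attach must fail.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    if $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo "unexpected: one bdev was added to two subsystems"
        exit 1
    else
        echo "Adding namespace failed - expected result."
    fi
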
00:16:55.483 06:47:09 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:55.483 test case2: host connect to nvmf target in multiple paths 00:16:55.483 06:47:09 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:55.483 06:47:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.483 06:47:09 -- common/autotest_common.sh@10 -- # set +x 00:16:55.483 [2024-12-14 06:47:09.357101] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:55.483 06:47:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.483 06:47:09 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:55.741 06:47:09 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:55.741 06:47:09 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:55.741 06:47:09 -- common/autotest_common.sh@1187 -- # local i=0 00:16:55.741 06:47:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:55.741 06:47:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:16:55.741 06:47:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:58.276 06:47:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:58.276 06:47:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:58.276 06:47:11 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:58.276 06:47:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:16:58.276 06:47:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:58.276 06:47:11 -- common/autotest_common.sh@1197 -- # return 0 00:16:58.276 06:47:11 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:58.276 [global] 00:16:58.276 thread=1 00:16:58.276 invalidate=1 00:16:58.276 rw=write 00:16:58.276 time_based=1 00:16:58.276 runtime=1 00:16:58.276 ioengine=libaio 00:16:58.276 direct=1 00:16:58.276 bs=4096 00:16:58.276 iodepth=1 00:16:58.276 norandommap=0 00:16:58.276 numjobs=1 00:16:58.276 00:16:58.276 verify_dump=1 00:16:58.276 verify_backlog=512 00:16:58.276 verify_state_save=0 00:16:58.276 do_verify=1 00:16:58.276 verify=crc32c-intel 00:16:58.276 [job0] 00:16:58.276 filename=/dev/nvme0n1 00:16:58.276 Could not set queue depth (nvme0n1) 00:16:58.276 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:58.276 fio-3.35 00:16:58.276 Starting 1 thread 00:16:59.213 00:16:59.213 job0: (groupid=0, jobs=1): err= 0: pid=76235: Sat Dec 14 06:47:13 2024 00:16:59.213 read: IOPS=3335, BW=13.0MiB/s (13.7MB/s)(13.0MiB/1001msec) 00:16:59.213 slat (nsec): min=12646, max=66440, avg=15960.71, stdev=4298.14 00:16:59.213 clat (usec): min=113, max=7143, avg=142.72, stdev=151.26 00:16:59.213 lat (usec): min=126, max=7156, avg=158.69, stdev=151.58 00:16:59.213 clat percentiles (usec): 00:16:59.213 | 1.00th=[ 119], 5.00th=[ 122], 10.00th=[ 124], 20.00th=[ 126], 00:16:59.213 | 30.00th=[ 128], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135], 00:16:59.213 | 70.00th=[ 141], 80.00th=[ 149], 90.00th=[ 
159], 95.00th=[ 167], 00:16:59.213 | 99.00th=[ 210], 99.50th=[ 265], 99.90th=[ 1926], 99.95th=[ 3490], 00:16:59.213 | 99.99th=[ 7111] 00:16:59.213 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:16:59.213 slat (usec): min=19, max=136, avg=24.35, stdev= 6.09 00:16:59.213 clat (usec): min=81, max=1743, avg=103.39, stdev=37.01 00:16:59.213 lat (usec): min=102, max=1765, avg=127.74, stdev=37.98 00:16:59.213 clat percentiles (usec): 00:16:59.213 | 1.00th=[ 87], 5.00th=[ 89], 10.00th=[ 91], 20.00th=[ 92], 00:16:59.213 | 30.00th=[ 94], 40.00th=[ 95], 50.00th=[ 97], 60.00th=[ 99], 00:16:59.213 | 70.00th=[ 103], 80.00th=[ 112], 90.00th=[ 123], 95.00th=[ 135], 00:16:59.213 | 99.00th=[ 165], 99.50th=[ 200], 99.90th=[ 482], 99.95th=[ 922], 00:16:59.213 | 99.99th=[ 1745] 00:16:59.213 bw ( KiB/s): min=15560, max=15560, per=100.00%, avg=15560.00, stdev= 0.00, samples=1 00:16:59.213 iops : min= 3890, max= 3890, avg=3890.00, stdev= 0.00, samples=1 00:16:59.213 lat (usec) : 100=32.64%, 250=66.99%, 500=0.22%, 750=0.01%, 1000=0.03% 00:16:59.213 lat (msec) : 2=0.06%, 4=0.03%, 10=0.01% 00:16:59.213 cpu : usr=3.20%, sys=9.90%, ctx=6926, majf=0, minf=5 00:16:59.213 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:59.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.213 issued rwts: total=3339,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.213 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:59.213 00:16:59.213 Run status group 0 (all jobs): 00:16:59.213 READ: bw=13.0MiB/s (13.7MB/s), 13.0MiB/s-13.0MiB/s (13.7MB/s-13.7MB/s), io=13.0MiB (13.7MB), run=1001-1001msec 00:16:59.213 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:16:59.213 00:16:59.213 Disk stats (read/write): 00:16:59.213 nvme0n1: ios=3122/3077, merge=0/0, ticks=476/364, in_queue=840, util=90.48% 00:16:59.213 06:47:13 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:59.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:59.213 06:47:13 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:59.213 06:47:13 -- common/autotest_common.sh@1208 -- # local i=0 00:16:59.213 06:47:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:59.213 06:47:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:59.213 06:47:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:59.213 06:47:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:59.213 06:47:13 -- common/autotest_common.sh@1220 -- # return 0 00:16:59.213 06:47:13 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:59.213 06:47:13 -- target/nmic.sh@53 -- # nvmftestfini 00:16:59.213 06:47:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:59.213 06:47:13 -- nvmf/common.sh@116 -- # sync 00:16:59.472 06:47:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:59.472 06:47:13 -- nvmf/common.sh@119 -- # set +e 00:16:59.472 06:47:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:59.472 06:47:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:59.472 rmmod nvme_tcp 00:16:59.472 rmmod nvme_fabrics 00:16:59.472 rmmod nvme_keyring 00:16:59.472 06:47:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:59.472 06:47:13 -- nvmf/common.sh@123 -- # set -e 00:16:59.472 06:47:13 -- 
nvmf/common.sh@124 -- # return 0 00:16:59.472 06:47:13 -- nvmf/common.sh@477 -- # '[' -n 76125 ']' 00:16:59.472 06:47:13 -- nvmf/common.sh@478 -- # killprocess 76125 00:16:59.473 06:47:13 -- common/autotest_common.sh@936 -- # '[' -z 76125 ']' 00:16:59.473 06:47:13 -- common/autotest_common.sh@940 -- # kill -0 76125 00:16:59.473 06:47:13 -- common/autotest_common.sh@941 -- # uname 00:16:59.473 06:47:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:59.473 06:47:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76125 00:16:59.473 06:47:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:59.473 06:47:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:59.473 killing process with pid 76125 00:16:59.473 06:47:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76125' 00:16:59.473 06:47:13 -- common/autotest_common.sh@955 -- # kill 76125 00:16:59.473 06:47:13 -- common/autotest_common.sh@960 -- # wait 76125 00:16:59.732 06:47:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:59.732 06:47:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:59.732 06:47:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:59.732 06:47:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:59.732 06:47:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:59.732 06:47:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.732 06:47:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.732 06:47:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.732 06:47:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:59.732 00:16:59.732 real 0m6.172s 00:16:59.732 user 0m20.391s 00:16:59.732 sys 0m1.515s 00:16:59.991 ************************************ 00:16:59.991 END TEST nvmf_nmic 00:16:59.991 ************************************ 00:16:59.991 06:47:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:59.991 06:47:13 -- common/autotest_common.sh@10 -- # set +x 00:16:59.991 06:47:13 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:59.991 06:47:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:59.991 06:47:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:59.991 06:47:13 -- common/autotest_common.sh@10 -- # set +x 00:16:59.991 ************************************ 00:16:59.991 START TEST nvmf_fio_target 00:16:59.991 ************************************ 00:16:59.991 06:47:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:59.991 * Looking for test storage... 
00:16:59.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:59.991 06:47:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:59.991 06:47:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:59.991 06:47:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:59.991 06:47:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:59.991 06:47:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:59.991 06:47:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:59.991 06:47:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:59.991 06:47:13 -- scripts/common.sh@335 -- # IFS=.-: 00:16:59.991 06:47:13 -- scripts/common.sh@335 -- # read -ra ver1 00:16:59.991 06:47:13 -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.991 06:47:13 -- scripts/common.sh@336 -- # read -ra ver2 00:16:59.991 06:47:13 -- scripts/common.sh@337 -- # local 'op=<' 00:16:59.991 06:47:13 -- scripts/common.sh@339 -- # ver1_l=2 00:16:59.991 06:47:13 -- scripts/common.sh@340 -- # ver2_l=1 00:16:59.991 06:47:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:59.991 06:47:13 -- scripts/common.sh@343 -- # case "$op" in 00:16:59.991 06:47:13 -- scripts/common.sh@344 -- # : 1 00:16:59.991 06:47:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:59.991 06:47:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:59.991 06:47:13 -- scripts/common.sh@364 -- # decimal 1 00:16:59.991 06:47:13 -- scripts/common.sh@352 -- # local d=1 00:16:59.991 06:47:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.991 06:47:13 -- scripts/common.sh@354 -- # echo 1 00:16:59.991 06:47:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:59.991 06:47:13 -- scripts/common.sh@365 -- # decimal 2 00:16:59.991 06:47:13 -- scripts/common.sh@352 -- # local d=2 00:16:59.991 06:47:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.991 06:47:13 -- scripts/common.sh@354 -- # echo 2 00:16:59.991 06:47:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:59.991 06:47:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:59.991 06:47:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:59.991 06:47:13 -- scripts/common.sh@367 -- # return 0 00:16:59.991 06:47:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.991 06:47:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:59.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.992 --rc genhtml_branch_coverage=1 00:16:59.992 --rc genhtml_function_coverage=1 00:16:59.992 --rc genhtml_legend=1 00:16:59.992 --rc geninfo_all_blocks=1 00:16:59.992 --rc geninfo_unexecuted_blocks=1 00:16:59.992 00:16:59.992 ' 00:16:59.992 06:47:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:59.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.992 --rc genhtml_branch_coverage=1 00:16:59.992 --rc genhtml_function_coverage=1 00:16:59.992 --rc genhtml_legend=1 00:16:59.992 --rc geninfo_all_blocks=1 00:16:59.992 --rc geninfo_unexecuted_blocks=1 00:16:59.992 00:16:59.992 ' 00:16:59.992 06:47:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:59.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.992 --rc genhtml_branch_coverage=1 00:16:59.992 --rc genhtml_function_coverage=1 00:16:59.992 --rc genhtml_legend=1 00:16:59.992 --rc geninfo_all_blocks=1 00:16:59.992 --rc geninfo_unexecuted_blocks=1 00:16:59.992 00:16:59.992 ' 00:16:59.992 
06:47:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:59.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.992 --rc genhtml_branch_coverage=1 00:16:59.992 --rc genhtml_function_coverage=1 00:16:59.992 --rc genhtml_legend=1 00:16:59.992 --rc geninfo_all_blocks=1 00:16:59.992 --rc geninfo_unexecuted_blocks=1 00:16:59.992 00:16:59.992 ' 00:16:59.992 06:47:13 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:59.992 06:47:13 -- nvmf/common.sh@7 -- # uname -s 00:16:59.992 06:47:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.992 06:47:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.992 06:47:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.992 06:47:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.992 06:47:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.992 06:47:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.992 06:47:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.992 06:47:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.992 06:47:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.992 06:47:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.992 06:47:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:17:00.251 06:47:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:17:00.251 06:47:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.251 06:47:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.251 06:47:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:00.251 06:47:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:00.251 06:47:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.251 06:47:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.251 06:47:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.251 06:47:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.251 06:47:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.252 06:47:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.252 06:47:13 -- paths/export.sh@5 -- # export PATH 00:17:00.252 06:47:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.252 06:47:13 -- nvmf/common.sh@46 -- # : 0 00:17:00.252 06:47:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:00.252 06:47:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:00.252 06:47:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:00.252 06:47:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.252 06:47:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.252 06:47:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:00.252 06:47:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:00.252 06:47:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:00.252 06:47:13 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:00.252 06:47:13 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:00.252 06:47:13 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:00.252 06:47:13 -- target/fio.sh@16 -- # nvmftestinit 00:17:00.252 06:47:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:00.252 06:47:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.252 06:47:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:00.252 06:47:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:00.252 06:47:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:00.252 06:47:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.252 06:47:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.252 06:47:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.252 06:47:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:00.252 06:47:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:00.252 06:47:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:00.252 06:47:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:00.252 06:47:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:00.252 06:47:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:00.252 06:47:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:00.252 06:47:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:00.252 06:47:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:00.252 06:47:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:00.252 06:47:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:00.252 06:47:13 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:00.252 06:47:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:00.252 06:47:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:00.252 06:47:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:00.252 06:47:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:00.252 06:47:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:00.252 06:47:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:00.252 06:47:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:00.252 06:47:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:00.252 Cannot find device "nvmf_tgt_br" 00:17:00.252 06:47:14 -- nvmf/common.sh@154 -- # true 00:17:00.252 06:47:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:00.252 Cannot find device "nvmf_tgt_br2" 00:17:00.252 06:47:14 -- nvmf/common.sh@155 -- # true 00:17:00.252 06:47:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:00.252 06:47:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:00.252 Cannot find device "nvmf_tgt_br" 00:17:00.252 06:47:14 -- nvmf/common.sh@157 -- # true 00:17:00.252 06:47:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:00.252 Cannot find device "nvmf_tgt_br2" 00:17:00.252 06:47:14 -- nvmf/common.sh@158 -- # true 00:17:00.252 06:47:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:00.252 06:47:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:00.252 06:47:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:00.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:00.252 06:47:14 -- nvmf/common.sh@161 -- # true 00:17:00.252 06:47:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:00.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:00.252 06:47:14 -- nvmf/common.sh@162 -- # true 00:17:00.252 06:47:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:00.252 06:47:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:00.252 06:47:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:00.252 06:47:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:00.252 06:47:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:00.252 06:47:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:00.252 06:47:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:00.252 06:47:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:00.252 06:47:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:00.252 06:47:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:00.252 06:47:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:00.252 06:47:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:00.252 06:47:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:00.252 06:47:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:00.252 06:47:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
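For readability, the veth/namespace topology that nvmf_veth_init is assembling in the trace here (it is completed just below with the nvmf_br bridge, the iptables rule for port 4420 and the ping checks) amounts to roughly the following standalone commands. The interface names, namespace name and 10.0.0.x addresses are the ones traced in this log; the condensed form itself is only a sketch:

  ip netns add nvmf_tgt_ns_spdk
  # three veth pairs: the initiator end stays in the root namespace,
  # the two target interfaces are moved into the SPDK namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # every interface is then brought up, the *_br peers are enslaved to
  # the nvmf_br bridge, and TCP port 4420 is allowed in via iptables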
00:17:00.252 06:47:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:00.252 06:47:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:00.511 06:47:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:00.511 06:47:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:00.511 06:47:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:00.511 06:47:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:00.511 06:47:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:00.511 06:47:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:00.511 06:47:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:00.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:00.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:17:00.511 00:17:00.511 --- 10.0.0.2 ping statistics --- 00:17:00.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.511 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:00.511 06:47:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:00.511 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:00.511 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:17:00.511 00:17:00.511 --- 10.0.0.3 ping statistics --- 00:17:00.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.512 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:17:00.512 06:47:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:00.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:00.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:17:00.512 00:17:00.512 --- 10.0.0.1 ping statistics --- 00:17:00.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.512 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:00.512 06:47:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:00.512 06:47:14 -- nvmf/common.sh@421 -- # return 0 00:17:00.512 06:47:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:00.512 06:47:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:00.512 06:47:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:00.512 06:47:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:00.512 06:47:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:00.512 06:47:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:00.512 06:47:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:00.512 06:47:14 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:00.512 06:47:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:00.512 06:47:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:00.512 06:47:14 -- common/autotest_common.sh@10 -- # set +x 00:17:00.512 06:47:14 -- nvmf/common.sh@469 -- # nvmfpid=76426 00:17:00.512 06:47:14 -- nvmf/common.sh@470 -- # waitforlisten 76426 00:17:00.512 06:47:14 -- common/autotest_common.sh@829 -- # '[' -z 76426 ']' 00:17:00.512 06:47:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:00.512 06:47:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.512 06:47:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:00.512 06:47:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:00.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.512 06:47:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:00.512 06:47:14 -- common/autotest_common.sh@10 -- # set +x 00:17:00.512 [2024-12-14 06:47:14.383626] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:00.512 [2024-12-14 06:47:14.384297] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.771 [2024-12-14 06:47:14.521313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:00.771 [2024-12-14 06:47:14.632101] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:00.771 [2024-12-14 06:47:14.632287] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:00.771 [2024-12-14 06:47:14.632299] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:00.771 [2024-12-14 06:47:14.632307] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:00.771 [2024-12-14 06:47:14.632831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.771 [2024-12-14 06:47:14.632987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.771 [2024-12-14 06:47:14.633192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:00.771 [2024-12-14 06:47:14.633195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.706 06:47:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:01.706 06:47:15 -- common/autotest_common.sh@862 -- # return 0 00:17:01.706 06:47:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:01.706 06:47:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:01.706 06:47:15 -- common/autotest_common.sh@10 -- # set +x 00:17:01.706 06:47:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:01.706 06:47:15 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:01.706 [2024-12-14 06:47:15.625861] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:01.706 06:47:15 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:02.274 06:47:15 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:02.274 06:47:15 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:02.532 06:47:16 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:02.532 06:47:16 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:02.791 06:47:16 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:02.791 06:47:16 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:03.050 06:47:16 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:03.050 06:47:16 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:03.309 06:47:17 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:03.567 06:47:17 -- target/fio.sh@29 -- # 
concat_malloc_bdevs='Malloc4 ' 00:17:03.567 06:47:17 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:03.826 06:47:17 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:03.826 06:47:17 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:04.085 06:47:17 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:04.085 06:47:17 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:04.343 06:47:18 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:04.343 06:47:18 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:04.343 06:47:18 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:04.601 06:47:18 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:04.601 06:47:18 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:04.861 06:47:18 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:05.120 [2024-12-14 06:47:18.990266] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.120 06:47:19 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:05.403 06:47:19 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:05.662 06:47:19 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:05.662 06:47:19 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:05.662 06:47:19 -- common/autotest_common.sh@1187 -- # local i=0 00:17:05.662 06:47:19 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:05.662 06:47:19 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:17:05.662 06:47:19 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:17:05.662 06:47:19 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:08.195 06:47:21 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:08.195 06:47:21 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:17:08.195 06:47:21 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:08.195 06:47:21 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:17:08.195 06:47:21 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:08.195 06:47:21 -- common/autotest_common.sh@1197 -- # return 0 00:17:08.195 06:47:21 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:08.195 [global] 00:17:08.195 thread=1 00:17:08.195 invalidate=1 00:17:08.195 rw=write 00:17:08.195 time_based=1 00:17:08.195 runtime=1 00:17:08.195 ioengine=libaio 00:17:08.195 direct=1 00:17:08.195 bs=4096 00:17:08.195 iodepth=1 00:17:08.195 norandommap=0 00:17:08.195 numjobs=1 00:17:08.195 00:17:08.195 verify_dump=1 00:17:08.195 verify_backlog=512 00:17:08.195 
verify_state_save=0 00:17:08.195 do_verify=1 00:17:08.195 verify=crc32c-intel 00:17:08.195 [job0] 00:17:08.195 filename=/dev/nvme0n1 00:17:08.195 [job1] 00:17:08.195 filename=/dev/nvme0n2 00:17:08.195 [job2] 00:17:08.195 filename=/dev/nvme0n3 00:17:08.195 [job3] 00:17:08.195 filename=/dev/nvme0n4 00:17:08.195 Could not set queue depth (nvme0n1) 00:17:08.195 Could not set queue depth (nvme0n2) 00:17:08.196 Could not set queue depth (nvme0n3) 00:17:08.196 Could not set queue depth (nvme0n4) 00:17:08.196 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:08.196 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:08.196 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:08.196 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:08.196 fio-3.35 00:17:08.196 Starting 4 threads 00:17:09.131 00:17:09.131 job0: (groupid=0, jobs=1): err= 0: pid=76717: Sat Dec 14 06:47:23 2024 00:17:09.131 read: IOPS=1885, BW=7540KiB/s (7721kB/s)(7548KiB/1001msec) 00:17:09.131 slat (nsec): min=16903, max=56702, avg=20351.06, stdev=3757.06 00:17:09.131 clat (usec): min=129, max=3144, avg=261.34, stdev=75.34 00:17:09.131 lat (usec): min=148, max=3164, avg=281.69, stdev=75.46 00:17:09.131 clat percentiles (usec): 00:17:09.131 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:17:09.131 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 258], 00:17:09.131 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 330], 00:17:09.131 | 99.00th=[ 392], 99.50th=[ 404], 99.90th=[ 840], 99.95th=[ 3130], 00:17:09.131 | 99.99th=[ 3130] 00:17:09.131 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:17:09.131 slat (usec): min=24, max=216, avg=30.24, stdev= 7.73 00:17:09.131 clat (usec): min=114, max=282, avg=194.37, stdev=20.38 00:17:09.131 lat (usec): min=146, max=380, avg=224.61, stdev=20.85 00:17:09.131 clat percentiles (usec): 00:17:09.131 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:17:09.131 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 196], 00:17:09.131 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 231], 00:17:09.131 | 99.00th=[ 255], 99.50th=[ 262], 99.90th=[ 277], 99.95th=[ 281], 00:17:09.131 | 99.99th=[ 281] 00:17:09.131 bw ( KiB/s): min= 8192, max= 8192, per=20.37%, avg=8192.00, stdev= 0.00, samples=1 00:17:09.131 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:17:09.131 lat (usec) : 250=73.09%, 500=26.81%, 750=0.05%, 1000=0.03% 00:17:09.131 lat (msec) : 4=0.03% 00:17:09.131 cpu : usr=1.60%, sys=7.80%, ctx=3937, majf=0, minf=9 00:17:09.131 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:09.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.131 issued rwts: total=1887,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.131 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:09.131 job1: (groupid=0, jobs=1): err= 0: pid=76718: Sat Dec 14 06:47:23 2024 00:17:09.131 read: IOPS=1996, BW=7984KiB/s (8176kB/s)(7992KiB/1001msec) 00:17:09.131 slat (nsec): min=12966, max=50437, avg=15906.80, stdev=3589.71 00:17:09.131 clat (usec): min=126, max=2046, avg=256.28, stdev=48.35 00:17:09.131 lat (usec): min=142, max=2063, 
avg=272.19, stdev=48.41 00:17:09.131 clat percentiles (usec): 00:17:09.131 | 1.00th=[ 165], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 241], 00:17:09.131 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:17:09.131 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 297], 00:17:09.131 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 750], 99.95th=[ 2040], 00:17:09.131 | 99.99th=[ 2040] 00:17:09.131 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:17:09.131 slat (nsec): min=18827, max=83522, avg=23092.80, stdev=5350.33 00:17:09.131 clat (usec): min=92, max=741, avg=196.60, stdev=26.85 00:17:09.131 lat (usec): min=112, max=766, avg=219.69, stdev=27.46 00:17:09.131 clat percentiles (usec): 00:17:09.131 | 1.00th=[ 115], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 184], 00:17:09.131 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:17:09.131 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 235], 00:17:09.131 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 273], 99.95th=[ 302], 00:17:09.131 | 99.99th=[ 742] 00:17:09.131 bw ( KiB/s): min= 8192, max= 8192, per=20.37%, avg=8192.00, stdev= 0.00, samples=2 00:17:09.131 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:17:09.131 lat (usec) : 100=0.07%, 250=70.76%, 500=29.09%, 750=0.02%, 1000=0.02% 00:17:09.131 lat (msec) : 4=0.02% 00:17:09.131 cpu : usr=1.40%, sys=6.10%, ctx=4047, majf=0, minf=9 00:17:09.131 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:09.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.131 issued rwts: total=1998,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.131 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:09.131 job2: (groupid=0, jobs=1): err= 0: pid=76719: Sat Dec 14 06:47:23 2024 00:17:09.131 read: IOPS=2713, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1001msec) 00:17:09.131 slat (nsec): min=13765, max=79645, avg=16793.98, stdev=4981.91 00:17:09.131 clat (usec): min=139, max=244, avg=169.17, stdev=15.10 00:17:09.131 lat (usec): min=154, max=267, avg=185.96, stdev=16.22 00:17:09.131 clat percentiles (usec): 00:17:09.131 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:17:09.131 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:17:09.131 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 198], 00:17:09.131 | 99.00th=[ 217], 99.50th=[ 221], 99.90th=[ 239], 99.95th=[ 245], 00:17:09.131 | 99.99th=[ 245] 00:17:09.131 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:17:09.131 slat (usec): min=19, max=104, avg=24.16, stdev= 6.35 00:17:09.131 clat (usec): min=101, max=206, avg=133.95, stdev=14.12 00:17:09.131 lat (usec): min=124, max=280, avg=158.12, stdev=15.89 00:17:09.131 clat percentiles (usec): 00:17:09.131 | 1.00th=[ 110], 5.00th=[ 115], 10.00th=[ 118], 20.00th=[ 123], 00:17:09.131 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:17:09.131 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 161], 00:17:09.131 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 202], 99.95th=[ 204], 00:17:09.131 | 99.99th=[ 206] 00:17:09.131 bw ( KiB/s): min=12288, max=12288, per=30.56%, avg=12288.00, stdev= 0.00, samples=1 00:17:09.131 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:17:09.131 lat (usec) : 250=100.00% 00:17:09.131 cpu : usr=2.00%, sys=8.70%, ctx=5788, majf=0, minf=3 
00:17:09.131 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:09.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.131 issued rwts: total=2716,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.131 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:09.131 job3: (groupid=0, jobs=1): err= 0: pid=76720: Sat Dec 14 06:47:23 2024 00:17:09.131 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:17:09.131 slat (nsec): min=13088, max=57799, avg=16978.64, stdev=4719.06 00:17:09.131 clat (usec): min=149, max=1777, avg=181.36, stdev=39.40 00:17:09.131 lat (usec): min=164, max=1791, avg=198.34, stdev=39.80 00:17:09.131 clat percentiles (usec): 00:17:09.131 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:17:09.131 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:17:09.131 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 212], 00:17:09.131 | 99.00th=[ 231], 99.50th=[ 237], 99.90th=[ 251], 99.95th=[ 1057], 00:17:09.131 | 99.99th=[ 1778] 00:17:09.131 write: IOPS=2893, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1001msec); 0 zone resets 00:17:09.131 slat (nsec): min=19191, max=91530, avg=25472.96, stdev=7875.59 00:17:09.131 clat (usec): min=108, max=220, avg=141.19, stdev=15.63 00:17:09.131 lat (usec): min=131, max=267, avg=166.66, stdev=18.35 00:17:09.131 clat percentiles (usec): 00:17:09.131 | 1.00th=[ 116], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 129], 00:17:09.131 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:17:09.131 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 163], 95.00th=[ 172], 00:17:09.131 | 99.00th=[ 192], 99.50th=[ 196], 99.90th=[ 212], 99.95th=[ 215], 00:17:09.131 | 99.99th=[ 221] 00:17:09.131 bw ( KiB/s): min=12288, max=12288, per=30.56%, avg=12288.00, stdev= 0.00, samples=1 00:17:09.132 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:17:09.132 lat (usec) : 250=99.95%, 500=0.02% 00:17:09.132 lat (msec) : 2=0.04% 00:17:09.132 cpu : usr=1.50%, sys=9.20%, ctx=5457, majf=0, minf=17 00:17:09.132 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:09.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.132 issued rwts: total=2560,2896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.132 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:09.132 00:17:09.132 Run status group 0 (all jobs): 00:17:09.132 READ: bw=35.7MiB/s (37.5MB/s), 7540KiB/s-10.6MiB/s (7721kB/s-11.1MB/s), io=35.8MiB (37.5MB), run=1001-1001msec 00:17:09.132 WRITE: bw=39.3MiB/s (41.2MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=39.3MiB (41.2MB), run=1001-1001msec 00:17:09.132 00:17:09.132 Disk stats (read/write): 00:17:09.132 nvme0n1: ios=1586/1884, merge=0/0, ticks=441/383, in_queue=824, util=87.78% 00:17:09.132 nvme0n2: ios=1578/2000, merge=0/0, ticks=424/413, in_queue=837, util=88.45% 00:17:09.132 nvme0n3: ios=2406/2560, merge=0/0, ticks=416/381, in_queue=797, util=89.25% 00:17:09.132 nvme0n4: ios=2144/2560, merge=0/0, ticks=411/394, in_queue=805, util=89.80% 00:17:09.132 06:47:23 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:09.132 [global] 00:17:09.132 thread=1 00:17:09.132 invalidate=1 00:17:09.132 rw=randwrite 00:17:09.132 time_based=1 
00:17:09.132 runtime=1 00:17:09.132 ioengine=libaio 00:17:09.132 direct=1 00:17:09.132 bs=4096 00:17:09.132 iodepth=1 00:17:09.132 norandommap=0 00:17:09.132 numjobs=1 00:17:09.132 00:17:09.132 verify_dump=1 00:17:09.132 verify_backlog=512 00:17:09.132 verify_state_save=0 00:17:09.132 do_verify=1 00:17:09.132 verify=crc32c-intel 00:17:09.132 [job0] 00:17:09.132 filename=/dev/nvme0n1 00:17:09.132 [job1] 00:17:09.132 filename=/dev/nvme0n2 00:17:09.132 [job2] 00:17:09.132 filename=/dev/nvme0n3 00:17:09.132 [job3] 00:17:09.132 filename=/dev/nvme0n4 00:17:09.390 Could not set queue depth (nvme0n1) 00:17:09.390 Could not set queue depth (nvme0n2) 00:17:09.390 Could not set queue depth (nvme0n3) 00:17:09.390 Could not set queue depth (nvme0n4) 00:17:09.390 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:09.390 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:09.391 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:09.391 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:09.391 fio-3.35 00:17:09.391 Starting 4 threads 00:17:10.767 00:17:10.767 job0: (groupid=0, jobs=1): err= 0: pid=76774: Sat Dec 14 06:47:24 2024 00:17:10.767 read: IOPS=1541, BW=6166KiB/s (6314kB/s)(6172KiB/1001msec) 00:17:10.767 slat (nsec): min=14629, max=47980, avg=19008.31, stdev=3830.81 00:17:10.767 clat (usec): min=148, max=2522, avg=288.94, stdev=66.64 00:17:10.767 lat (usec): min=166, max=2539, avg=307.95, stdev=66.86 00:17:10.767 clat percentiles (usec): 00:17:10.767 | 1.00th=[ 161], 5.00th=[ 251], 10.00th=[ 262], 20.00th=[ 269], 00:17:10.767 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:17:10.767 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 338], 00:17:10.767 | 99.00th=[ 400], 99.50th=[ 424], 99.90th=[ 449], 99.95th=[ 2507], 00:17:10.767 | 99.99th=[ 2507] 00:17:10.767 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:17:10.767 slat (usec): min=18, max=191, avg=28.74, stdev= 7.70 00:17:10.767 clat (usec): min=102, max=926, avg=223.45, stdev=33.61 00:17:10.767 lat (usec): min=122, max=958, avg=252.19, stdev=35.26 00:17:10.767 clat percentiles (usec): 00:17:10.767 | 1.00th=[ 131], 5.00th=[ 190], 10.00th=[ 198], 20.00th=[ 204], 00:17:10.767 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 227], 00:17:10.767 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 265], 00:17:10.767 | 99.00th=[ 293], 99.50th=[ 322], 99.90th=[ 529], 99.95th=[ 644], 00:17:10.767 | 99.99th=[ 930] 00:17:10.767 bw ( KiB/s): min= 8192, max= 8192, per=21.73%, avg=8192.00, stdev= 0.00, samples=1 00:17:10.767 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:17:10.767 lat (usec) : 250=51.27%, 500=48.62%, 750=0.06%, 1000=0.03% 00:17:10.767 lat (msec) : 4=0.03% 00:17:10.767 cpu : usr=1.80%, sys=6.60%, ctx=3592, majf=0, minf=15 00:17:10.767 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:10.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.767 issued rwts: total=1543,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.767 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:10.767 job1: (groupid=0, jobs=1): err= 0: pid=76775: Sat Dec 14 06:47:24 
2024 00:17:10.767 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:17:10.767 slat (nsec): min=13106, max=59728, avg=16284.49, stdev=5155.86 00:17:10.767 clat (usec): min=145, max=3336, avg=194.61, stdev=82.02 00:17:10.767 lat (usec): min=159, max=3363, avg=210.90, stdev=82.52 00:17:10.767 clat percentiles (usec): 00:17:10.767 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:17:10.767 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:17:10.767 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 221], 95.00th=[ 231], 00:17:10.767 | 99.00th=[ 265], 99.50th=[ 289], 99.90th=[ 1237], 99.95th=[ 2278], 00:17:10.767 | 99.99th=[ 3326] 00:17:10.767 write: IOPS=2610, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec); 0 zone resets 00:17:10.767 slat (nsec): min=18165, max=95373, avg=23457.84, stdev=7313.40 00:17:10.767 clat (usec): min=106, max=7739, avg=149.21, stdev=172.63 00:17:10.767 lat (usec): min=128, max=7758, avg=172.67, stdev=172.76 00:17:10.767 clat percentiles (usec): 00:17:10.767 | 1.00th=[ 117], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 130], 00:17:10.767 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 145], 00:17:10.767 | 70.00th=[ 149], 80.00th=[ 155], 90.00th=[ 165], 95.00th=[ 174], 00:17:10.767 | 99.00th=[ 192], 99.50th=[ 198], 99.90th=[ 2507], 99.95th=[ 3425], 00:17:10.767 | 99.99th=[ 7767] 00:17:10.767 bw ( KiB/s): min=12288, max=12288, per=32.60%, avg=12288.00, stdev= 0.00, samples=1 00:17:10.767 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:17:10.767 lat (usec) : 250=99.09%, 500=0.68%, 750=0.08% 00:17:10.767 lat (msec) : 2=0.06%, 4=0.08%, 10=0.02% 00:17:10.767 cpu : usr=2.20%, sys=7.20%, ctx=5175, majf=0, minf=7 00:17:10.767 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:10.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.767 issued rwts: total=2560,2613,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.767 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:10.767 job2: (groupid=0, jobs=1): err= 0: pid=76776: Sat Dec 14 06:47:24 2024 00:17:10.767 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:17:10.767 slat (nsec): min=12388, max=63976, avg=17299.32, stdev=4108.36 00:17:10.767 clat (usec): min=134, max=12642, avg=300.13, stdev=318.73 00:17:10.767 lat (usec): min=147, max=12657, avg=317.43, stdev=318.70 00:17:10.767 clat percentiles (usec): 00:17:10.767 | 1.00th=[ 202], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 273], 00:17:10.767 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 293], 00:17:10.767 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 338], 00:17:10.767 | 99.00th=[ 371], 99.50th=[ 412], 99.90th=[ 1745], 99.95th=[12649], 00:17:10.767 | 99.99th=[12649] 00:17:10.767 write: IOPS=2002, BW=8012KiB/s (8204kB/s)(8020KiB/1001msec); 0 zone resets 00:17:10.767 slat (usec): min=15, max=177, avg=25.37, stdev= 7.62 00:17:10.767 clat (usec): min=121, max=913, avg=226.64, stdev=30.87 00:17:10.767 lat (usec): min=151, max=932, avg=252.00, stdev=31.67 00:17:10.767 clat percentiles (usec): 00:17:10.767 | 1.00th=[ 149], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 208], 00:17:10.767 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 231], 00:17:10.767 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 265], 00:17:10.767 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[ 457], 99.95th=[ 627], 00:17:10.767 | 99.99th=[ 914] 
00:17:10.768 bw ( KiB/s): min= 8192, max= 8192, per=21.73%, avg=8192.00, stdev= 0.00, samples=1 00:17:10.768 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:17:10.768 lat (usec) : 250=49.93%, 500=49.90%, 750=0.08%, 1000=0.03% 00:17:10.768 lat (msec) : 2=0.03%, 20=0.03% 00:17:10.768 cpu : usr=1.90%, sys=5.40%, ctx=3541, majf=0, minf=9 00:17:10.768 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:10.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.768 issued rwts: total=1536,2005,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.768 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:10.768 job3: (groupid=0, jobs=1): err= 0: pid=76777: Sat Dec 14 06:47:24 2024 00:17:10.768 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:17:10.768 slat (nsec): min=13317, max=68444, avg=17175.87, stdev=5747.23 00:17:10.768 clat (usec): min=142, max=3097, avg=189.54, stdev=69.58 00:17:10.768 lat (usec): min=157, max=3123, avg=206.72, stdev=70.15 00:17:10.768 clat percentiles (usec): 00:17:10.768 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:17:10.768 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 190], 00:17:10.768 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 225], 00:17:10.768 | 99.00th=[ 249], 99.50th=[ 260], 99.90th=[ 807], 99.95th=[ 1795], 00:17:10.768 | 99.99th=[ 3097] 00:17:10.768 write: IOPS=2765, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec); 0 zone resets 00:17:10.768 slat (nsec): min=19184, max=93834, avg=24140.95, stdev=7030.03 00:17:10.768 clat (usec): min=97, max=2167, avg=142.75, stdev=41.79 00:17:10.768 lat (usec): min=125, max=2191, avg=166.89, stdev=42.61 00:17:10.768 clat percentiles (usec): 00:17:10.768 | 1.00th=[ 115], 5.00th=[ 122], 10.00th=[ 125], 20.00th=[ 130], 00:17:10.768 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 143], 00:17:10.768 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 163], 95.00th=[ 172], 00:17:10.768 | 99.00th=[ 194], 99.50th=[ 204], 99.90th=[ 219], 99.95th=[ 322], 00:17:10.768 | 99.99th=[ 2180] 00:17:10.768 bw ( KiB/s): min=12288, max=12288, per=32.60%, avg=12288.00, stdev= 0.00, samples=1 00:17:10.768 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:17:10.768 lat (usec) : 100=0.02%, 250=99.55%, 500=0.36%, 1000=0.02% 00:17:10.768 lat (msec) : 2=0.02%, 4=0.04% 00:17:10.768 cpu : usr=1.80%, sys=8.10%, ctx=5329, majf=0, minf=15 00:17:10.768 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:10.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.768 issued rwts: total=2560,2768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.768 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:10.768 00:17:10.768 Run status group 0 (all jobs): 00:17:10.768 READ: bw=32.0MiB/s (33.5MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:17:10.768 WRITE: bw=36.8MiB/s (38.6MB/s), 8012KiB/s-10.8MiB/s (8204kB/s-11.3MB/s), io=36.9MiB (38.6MB), run=1001-1001msec 00:17:10.768 00:17:10.768 Disk stats (read/write): 00:17:10.768 nvme0n1: ios=1544/1536, merge=0/0, ticks=476/377, in_queue=853, util=88.26% 00:17:10.768 nvme0n2: ios=2094/2505, merge=0/0, ticks=452/407, in_queue=859, util=89.45% 00:17:10.768 nvme0n3: ios=1527/1536, merge=0/0, 
ticks=478/375, in_queue=853, util=89.61% 00:17:10.768 nvme0n4: ios=2106/2560, merge=0/0, ticks=414/390, in_queue=804, util=89.67% 00:17:10.768 06:47:24 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:10.768 [global] 00:17:10.768 thread=1 00:17:10.768 invalidate=1 00:17:10.768 rw=write 00:17:10.768 time_based=1 00:17:10.768 runtime=1 00:17:10.768 ioengine=libaio 00:17:10.768 direct=1 00:17:10.768 bs=4096 00:17:10.768 iodepth=128 00:17:10.768 norandommap=0 00:17:10.768 numjobs=1 00:17:10.768 00:17:10.768 verify_dump=1 00:17:10.768 verify_backlog=512 00:17:10.768 verify_state_save=0 00:17:10.768 do_verify=1 00:17:10.768 verify=crc32c-intel 00:17:10.768 [job0] 00:17:10.768 filename=/dev/nvme0n1 00:17:10.768 [job1] 00:17:10.768 filename=/dev/nvme0n2 00:17:10.768 [job2] 00:17:10.768 filename=/dev/nvme0n3 00:17:10.768 [job3] 00:17:10.768 filename=/dev/nvme0n4 00:17:10.768 Could not set queue depth (nvme0n1) 00:17:10.768 Could not set queue depth (nvme0n2) 00:17:10.768 Could not set queue depth (nvme0n3) 00:17:10.768 Could not set queue depth (nvme0n4) 00:17:10.768 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:10.768 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:10.768 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:10.768 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:10.768 fio-3.35 00:17:10.768 Starting 4 threads 00:17:12.153 00:17:12.153 job0: (groupid=0, jobs=1): err= 0: pid=76838: Sat Dec 14 06:47:25 2024 00:17:12.153 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:17:12.153 slat (usec): min=3, max=10864, avg=235.85, stdev=1091.63 00:17:12.153 clat (usec): min=19400, max=43587, avg=29758.08, stdev=3738.71 00:17:12.153 lat (usec): min=19433, max=45446, avg=29993.93, stdev=3812.09 00:17:12.153 clat percentiles (usec): 00:17:12.153 | 1.00th=[21890], 5.00th=[25035], 10.00th=[25822], 20.00th=[27395], 00:17:12.153 | 30.00th=[27919], 40.00th=[28705], 50.00th=[29230], 60.00th=[29754], 00:17:12.153 | 70.00th=[31065], 80.00th=[31851], 90.00th=[34341], 95.00th=[38536], 00:17:12.153 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[42206], 00:17:12.153 | 99.99th=[43779] 00:17:12.153 write: IOPS=2190, BW=8763KiB/s (8973kB/s)(8824KiB/1007msec); 0 zone resets 00:17:12.153 slat (usec): min=6, max=11711, avg=225.95, stdev=1066.08 00:17:12.153 clat (usec): min=5310, max=45207, avg=29684.38, stdev=4989.18 00:17:12.153 lat (usec): min=6246, max=45250, avg=29910.33, stdev=5087.96 00:17:12.153 clat percentiles (usec): 00:17:12.153 | 1.00th=[13304], 5.00th=[20317], 10.00th=[22414], 20.00th=[27132], 00:17:12.153 | 30.00th=[28181], 40.00th=[29230], 50.00th=[29754], 60.00th=[31065], 00:17:12.153 | 70.00th=[32113], 80.00th=[33817], 90.00th=[34866], 95.00th=[36439], 00:17:12.153 | 99.00th=[40633], 99.50th=[42206], 99.90th=[43779], 99.95th=[44303], 00:17:12.153 | 99.99th=[45351] 00:17:12.153 bw ( KiB/s): min= 8304, max= 8328, per=17.82%, avg=8316.00, stdev=16.97, samples=2 00:17:12.153 iops : min= 2076, max= 2082, avg=2079.00, stdev= 4.24, samples=2 00:17:12.153 lat (msec) : 10=0.16%, 20=2.23%, 50=97.60% 00:17:12.153 cpu : usr=3.18%, sys=6.06%, ctx=480, majf=0, minf=15 00:17:12.153 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:17:12.153 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:12.153 issued rwts: total=2048,2206,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:12.153 job1: (groupid=0, jobs=1): err= 0: pid=76839: Sat Dec 14 06:47:25 2024 00:17:12.153 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:17:12.153 slat (usec): min=6, max=6151, avg=147.17, stdev=713.11 00:17:12.153 clat (usec): min=13567, max=23225, avg=19815.81, stdev=1466.20 00:17:12.153 lat (usec): min=15466, max=26569, avg=19962.98, stdev=1311.16 00:17:12.153 clat percentiles (usec): 00:17:12.153 | 1.00th=[15533], 5.00th=[16909], 10.00th=[17957], 20.00th=[19006], 00:17:12.153 | 30.00th=[19268], 40.00th=[19792], 50.00th=[20055], 60.00th=[20317], 00:17:12.153 | 70.00th=[20317], 80.00th=[20841], 90.00th=[21627], 95.00th=[22152], 00:17:12.153 | 99.00th=[22938], 99.50th=[22938], 99.90th=[23200], 99.95th=[23200], 00:17:12.153 | 99.99th=[23200] 00:17:12.153 write: IOPS=3305, BW=12.9MiB/s (13.5MB/s)(13.0MiB/1005msec); 0 zone resets 00:17:12.153 slat (usec): min=14, max=5448, avg=155.90, stdev=702.13 00:17:12.153 clat (usec): min=4521, max=24730, avg=19793.86, stdev=2764.03 00:17:12.153 lat (usec): min=4545, max=24757, avg=19949.76, stdev=2756.67 00:17:12.153 clat percentiles (usec): 00:17:12.153 | 1.00th=[ 9241], 5.00th=[16188], 10.00th=[16909], 20.00th=[17695], 00:17:12.153 | 30.00th=[18220], 40.00th=[18744], 50.00th=[19792], 60.00th=[20841], 00:17:12.153 | 70.00th=[21627], 80.00th=[22152], 90.00th=[22938], 95.00th=[23462], 00:17:12.153 | 99.00th=[24511], 99.50th=[24511], 99.90th=[24773], 99.95th=[24773], 00:17:12.153 | 99.99th=[24773] 00:17:12.153 bw ( KiB/s): min=12760, max=12800, per=27.38%, avg=12780.00, stdev=28.28, samples=2 00:17:12.153 iops : min= 3190, max= 3200, avg=3195.00, stdev= 7.07, samples=2 00:17:12.153 lat (msec) : 10=0.55%, 20=48.06%, 50=51.39% 00:17:12.153 cpu : usr=3.49%, sys=11.06%, ctx=394, majf=0, minf=11 00:17:12.153 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:17:12.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:12.153 issued rwts: total=3072,3322,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:12.153 job2: (groupid=0, jobs=1): err= 0: pid=76840: Sat Dec 14 06:47:25 2024 00:17:12.153 read: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec) 00:17:12.153 slat (usec): min=3, max=10268, avg=233.50, stdev=988.00 00:17:12.153 clat (usec): min=23410, max=40489, avg=29885.91, stdev=2883.18 00:17:12.153 lat (usec): min=23425, max=47320, avg=30119.42, stdev=2960.88 00:17:12.153 clat percentiles (usec): 00:17:12.153 | 1.00th=[24773], 5.00th=[25822], 10.00th=[26870], 20.00th=[27657], 00:17:12.153 | 30.00th=[28181], 40.00th=[28705], 50.00th=[29230], 60.00th=[30016], 00:17:12.153 | 70.00th=[31065], 80.00th=[32113], 90.00th=[33162], 95.00th=[35390], 00:17:12.153 | 99.00th=[39060], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:17:12.153 | 99.99th=[40633] 00:17:12.153 write: IOPS=2121, BW=8484KiB/s (8688kB/s)(8552KiB/1008msec); 0 zone resets 00:17:12.153 slat (usec): min=4, max=10694, avg=235.31, stdev=1079.79 00:17:12.153 clat (usec): min=7179, max=44064, avg=30451.28, stdev=4211.28 00:17:12.153 lat (usec): min=7199, max=44100, 
avg=30686.58, stdev=4311.01 00:17:12.153 clat percentiles (usec): 00:17:12.153 | 1.00th=[11076], 5.00th=[25297], 10.00th=[26346], 20.00th=[27919], 00:17:12.153 | 30.00th=[28705], 40.00th=[29492], 50.00th=[30802], 60.00th=[31589], 00:17:12.153 | 70.00th=[32900], 80.00th=[33817], 90.00th=[34866], 95.00th=[35914], 00:17:12.153 | 99.00th=[38011], 99.50th=[39584], 99.90th=[43254], 99.95th=[43779], 00:17:12.153 | 99.99th=[44303] 00:17:12.153 bw ( KiB/s): min= 8192, max= 8208, per=17.57%, avg=8200.00, stdev=11.31, samples=2 00:17:12.153 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:17:12.153 lat (msec) : 10=0.36%, 20=0.93%, 50=98.71% 00:17:12.153 cpu : usr=2.98%, sys=5.86%, ctx=657, majf=0, minf=15 00:17:12.153 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:17:12.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:12.153 issued rwts: total=2048,2138,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:12.153 job3: (groupid=0, jobs=1): err= 0: pid=76841: Sat Dec 14 06:47:25 2024 00:17:12.153 read: IOPS=3579, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1004msec) 00:17:12.153 slat (usec): min=5, max=10214, avg=127.11, stdev=768.63 00:17:12.153 clat (usec): min=791, max=26112, avg=16430.74, stdev=1765.43 00:17:12.153 lat (usec): min=8088, max=26400, avg=16557.85, stdev=1837.13 00:17:12.153 clat percentiles (usec): 00:17:12.153 | 1.00th=[11338], 5.00th=[13829], 10.00th=[14877], 20.00th=[15401], 00:17:12.153 | 30.00th=[15795], 40.00th=[16057], 50.00th=[16319], 60.00th=[16581], 00:17:12.153 | 70.00th=[16909], 80.00th=[17433], 90.00th=[18744], 95.00th=[19006], 00:17:12.154 | 99.00th=[21890], 99.50th=[23200], 99.90th=[23987], 99.95th=[23987], 00:17:12.154 | 99.99th=[26084] 00:17:12.154 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:17:12.154 slat (usec): min=12, max=7921, avg=124.99, stdev=730.28 00:17:12.154 clat (usec): min=8108, max=25571, avg=16538.84, stdev=2286.53 00:17:12.154 lat (usec): min=8146, max=25628, avg=16663.84, stdev=2259.89 00:17:12.154 clat percentiles (usec): 00:17:12.154 | 1.00th=[ 9110], 5.00th=[10814], 10.00th=[14484], 20.00th=[15401], 00:17:12.154 | 30.00th=[15926], 40.00th=[16450], 50.00th=[16909], 60.00th=[17433], 00:17:12.154 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18482], 95.00th=[19006], 00:17:12.154 | 99.00th=[21890], 99.50th=[23725], 99.90th=[24511], 99.95th=[24773], 00:17:12.154 | 99.99th=[25560] 00:17:12.154 bw ( KiB/s): min=15440, max=16416, per=34.13%, avg=15928.00, stdev=690.14, samples=2 00:17:12.154 iops : min= 3860, max= 4104, avg=3982.00, stdev=172.53, samples=2 00:17:12.154 lat (usec) : 1000=0.01% 00:17:12.154 lat (msec) : 10=1.47%, 20=96.57%, 50=1.95% 00:17:12.154 cpu : usr=4.89%, sys=11.37%, ctx=274, majf=0, minf=12 00:17:12.154 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:12.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:12.154 issued rwts: total=3594,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:12.154 00:17:12.154 Run status group 0 (all jobs): 00:17:12.154 READ: bw=41.7MiB/s (43.7MB/s), 8127KiB/s-14.0MiB/s (8322kB/s-14.7MB/s), io=42.0MiB (44.1MB), run=1004-1008msec 00:17:12.154 WRITE: 
bw=45.6MiB/s (47.8MB/s), 8484KiB/s-15.9MiB/s (8688kB/s-16.7MB/s), io=45.9MiB (48.2MB), run=1004-1008msec 00:17:12.154 00:17:12.154 Disk stats (read/write): 00:17:12.154 nvme0n1: ios=1659/2048, merge=0/0, ticks=15491/18051, in_queue=33542, util=88.68% 00:17:12.154 nvme0n2: ios=2609/2959, merge=0/0, ticks=12216/13349, in_queue=25565, util=89.80% 00:17:12.154 nvme0n3: ios=1549/2045, merge=0/0, ticks=14656/18722, in_queue=33378, util=88.83% 00:17:12.154 nvme0n4: ios=3078/3521, merge=0/0, ticks=23353/25092, in_queue=48445, util=89.78% 00:17:12.154 06:47:25 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:12.154 [global] 00:17:12.154 thread=1 00:17:12.154 invalidate=1 00:17:12.154 rw=randwrite 00:17:12.154 time_based=1 00:17:12.154 runtime=1 00:17:12.154 ioengine=libaio 00:17:12.154 direct=1 00:17:12.154 bs=4096 00:17:12.154 iodepth=128 00:17:12.154 norandommap=0 00:17:12.154 numjobs=1 00:17:12.154 00:17:12.154 verify_dump=1 00:17:12.154 verify_backlog=512 00:17:12.154 verify_state_save=0 00:17:12.154 do_verify=1 00:17:12.154 verify=crc32c-intel 00:17:12.154 [job0] 00:17:12.154 filename=/dev/nvme0n1 00:17:12.154 [job1] 00:17:12.154 filename=/dev/nvme0n2 00:17:12.154 [job2] 00:17:12.154 filename=/dev/nvme0n3 00:17:12.154 [job3] 00:17:12.154 filename=/dev/nvme0n4 00:17:12.154 Could not set queue depth (nvme0n1) 00:17:12.154 Could not set queue depth (nvme0n2) 00:17:12.154 Could not set queue depth (nvme0n3) 00:17:12.154 Could not set queue depth (nvme0n4) 00:17:12.154 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:12.154 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:12.154 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:12.154 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:12.154 fio-3.35 00:17:12.154 Starting 4 threads 00:17:13.534 00:17:13.534 job0: (groupid=0, jobs=1): err= 0: pid=76894: Sat Dec 14 06:47:27 2024 00:17:13.534 read: IOPS=4080, BW=15.9MiB/s (16.7MB/s)(16.2MiB/1015msec) 00:17:13.534 slat (usec): min=3, max=13734, avg=112.31, stdev=772.75 00:17:13.534 clat (usec): min=4816, max=28707, avg=14988.10, stdev=3466.77 00:17:13.534 lat (usec): min=4829, max=28735, avg=15100.40, stdev=3504.60 00:17:13.534 clat percentiles (usec): 00:17:13.534 | 1.00th=[ 8848], 5.00th=[10421], 10.00th=[11731], 20.00th=[12649], 00:17:13.534 | 30.00th=[13042], 40.00th=[13566], 50.00th=[14091], 60.00th=[14877], 00:17:13.534 | 70.00th=[16188], 80.00th=[17695], 90.00th=[19006], 95.00th=[21365], 00:17:13.534 | 99.00th=[26870], 99.50th=[27657], 99.90th=[28443], 99.95th=[28705], 00:17:13.534 | 99.99th=[28705] 00:17:13.534 write: IOPS=4539, BW=17.7MiB/s (18.6MB/s)(18.0MiB/1015msec); 0 zone resets 00:17:13.534 slat (usec): min=6, max=12991, avg=107.68, stdev=719.62 00:17:13.534 clat (usec): min=3670, max=30573, avg=14433.83, stdev=3227.14 00:17:13.534 lat (usec): min=3697, max=30584, avg=14541.50, stdev=3302.63 00:17:13.534 clat percentiles (usec): 00:17:13.534 | 1.00th=[ 5211], 5.00th=[ 7963], 10.00th=[10683], 20.00th=[13042], 00:17:13.534 | 30.00th=[13698], 40.00th=[14222], 50.00th=[14877], 60.00th=[15401], 00:17:13.534 | 70.00th=[15926], 80.00th=[16188], 90.00th=[16909], 95.00th=[17433], 00:17:13.534 | 99.00th=[26870], 99.50th=[28705], 99.90th=[30540], 99.95th=[30540], 
00:17:13.534 | 99.99th=[30540] 00:17:13.534 bw ( KiB/s): min=17016, max=19200, per=39.09%, avg=18108.00, stdev=1544.32, samples=2 00:17:13.534 iops : min= 4254, max= 4800, avg=4527.00, stdev=386.08, samples=2 00:17:13.534 lat (msec) : 4=0.05%, 10=5.83%, 20=89.60%, 50=4.53% 00:17:13.534 cpu : usr=3.94%, sys=13.41%, ctx=454, majf=0, minf=7 00:17:13.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:13.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:13.534 issued rwts: total=4142,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:13.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:13.534 job1: (groupid=0, jobs=1): err= 0: pid=76895: Sat Dec 14 06:47:27 2024 00:17:13.534 read: IOPS=1510, BW=6041KiB/s (6186kB/s)(6144KiB/1017msec) 00:17:13.534 slat (usec): min=3, max=28274, avg=310.41, stdev=1949.16 00:17:13.534 clat (usec): min=19472, max=67786, avg=39375.67, stdev=8918.54 00:17:13.534 lat (usec): min=19537, max=67835, avg=39686.08, stdev=9104.60 00:17:13.534 clat percentiles (usec): 00:17:13.534 | 1.00th=[20055], 5.00th=[26084], 10.00th=[29492], 20.00th=[30540], 00:17:13.534 | 30.00th=[31589], 40.00th=[38536], 50.00th=[41157], 60.00th=[42206], 00:17:13.534 | 70.00th=[43254], 80.00th=[44827], 90.00th=[50594], 95.00th=[55837], 00:17:13.534 | 99.00th=[64226], 99.50th=[65274], 99.90th=[66847], 99.95th=[67634], 00:17:13.534 | 99.99th=[67634] 00:17:13.534 write: IOPS=1910, BW=7642KiB/s (7825kB/s)(7772KiB/1017msec); 0 zone resets 00:17:13.534 slat (usec): min=6, max=16184, avg=260.94, stdev=1427.51 00:17:13.534 clat (usec): min=16848, max=73309, avg=35112.48, stdev=7543.67 00:17:13.534 lat (usec): min=16876, max=73350, avg=35373.42, stdev=7615.90 00:17:13.534 clat percentiles (usec): 00:17:13.534 | 1.00th=[17171], 5.00th=[21103], 10.00th=[27657], 20.00th=[29492], 00:17:13.534 | 30.00th=[32375], 40.00th=[33162], 50.00th=[33817], 60.00th=[35914], 00:17:13.534 | 70.00th=[38536], 80.00th=[40633], 90.00th=[44303], 95.00th=[46400], 00:17:13.534 | 99.00th=[57934], 99.50th=[57934], 99.90th=[66847], 99.95th=[72877], 00:17:13.534 | 99.99th=[72877] 00:17:13.534 bw ( KiB/s): min= 6336, max= 8192, per=15.68%, avg=7264.00, stdev=1312.39, samples=2 00:17:13.534 iops : min= 1584, max= 2048, avg=1816.00, stdev=328.10, samples=2 00:17:13.534 lat (msec) : 20=3.08%, 50=90.69%, 100=6.24% 00:17:13.534 cpu : usr=1.97%, sys=5.61%, ctx=398, majf=0, minf=13 00:17:13.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:17:13.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:13.534 issued rwts: total=1536,1943,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:13.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:13.534 job2: (groupid=0, jobs=1): err= 0: pid=76896: Sat Dec 14 06:47:27 2024 00:17:13.534 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:17:13.534 slat (usec): min=3, max=12529, avg=148.10, stdev=857.58 00:17:13.534 clat (usec): min=10003, max=49778, avg=19301.09, stdev=7113.68 00:17:13.534 lat (usec): min=10019, max=61665, avg=19449.19, stdev=7192.50 00:17:13.534 clat percentiles (usec): 00:17:13.534 | 1.00th=[11076], 5.00th=[13173], 10.00th=[15008], 20.00th=[15795], 00:17:13.534 | 30.00th=[16057], 40.00th=[16188], 50.00th=[16581], 60.00th=[17171], 00:17:13.534 | 70.00th=[17433], 
80.00th=[21103], 90.00th=[30540], 95.00th=[35914], 00:17:13.534 | 99.00th=[49021], 99.50th=[49546], 99.90th=[49546], 99.95th=[49546], 00:17:13.534 | 99.99th=[49546] 00:17:13.534 write: IOPS=3162, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1009msec); 0 zone resets 00:17:13.534 slat (usec): min=6, max=20618, avg=161.98, stdev=1040.86 00:17:13.534 clat (usec): min=7815, max=49592, avg=20890.25, stdev=6856.57 00:17:13.534 lat (usec): min=8761, max=49627, avg=21052.23, stdev=6946.83 00:17:13.534 clat percentiles (usec): 00:17:13.534 | 1.00th=[10290], 5.00th=[13304], 10.00th=[15139], 20.00th=[16450], 00:17:13.534 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17695], 60.00th=[18482], 00:17:13.534 | 70.00th=[21890], 80.00th=[29230], 90.00th=[32900], 95.00th=[33817], 00:17:13.534 | 99.00th=[35390], 99.50th=[39060], 99.90th=[46400], 99.95th=[47449], 00:17:13.534 | 99.99th=[49546] 00:17:13.534 bw ( KiB/s): min=10488, max=14088, per=26.52%, avg=12288.00, stdev=2545.58, samples=2 00:17:13.534 iops : min= 2622, max= 3522, avg=3072.00, stdev=636.40, samples=2 00:17:13.534 lat (msec) : 10=0.21%, 20=73.45%, 50=26.35% 00:17:13.534 cpu : usr=2.58%, sys=10.91%, ctx=374, majf=0, minf=13 00:17:13.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:17:13.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:13.534 issued rwts: total=3072,3191,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:13.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:13.534 job3: (groupid=0, jobs=1): err= 0: pid=76897: Sat Dec 14 06:47:27 2024 00:17:13.534 read: IOPS=1989, BW=7957KiB/s (8148kB/s)(8100KiB/1018msec) 00:17:13.534 slat (usec): min=4, max=19909, avg=270.21, stdev=1625.45 00:17:13.534 clat (usec): min=1131, max=62448, avg=32891.82, stdev=12115.43 00:17:13.534 lat (usec): min=7948, max=69620, avg=33162.03, stdev=12230.51 00:17:13.534 clat percentiles (usec): 00:17:13.534 | 1.00th=[16188], 5.00th=[18744], 10.00th=[19006], 20.00th=[21103], 00:17:13.534 | 30.00th=[22414], 40.00th=[25297], 50.00th=[30540], 60.00th=[38011], 00:17:13.534 | 70.00th=[40633], 80.00th=[43779], 90.00th=[50070], 95.00th=[54264], 00:17:13.534 | 99.00th=[61080], 99.50th=[61080], 99.90th=[62653], 99.95th=[62653], 00:17:13.534 | 99.99th=[62653] 00:17:13.534 write: IOPS=2011, BW=8047KiB/s (8240kB/s)(8192KiB/1018msec); 0 zone resets 00:17:13.534 slat (usec): min=5, max=17964, avg=219.93, stdev=1153.14 00:17:13.534 clat (usec): min=3468, max=59109, avg=30436.77, stdev=12223.87 00:17:13.534 lat (usec): min=3493, max=59129, avg=30656.71, stdev=12312.89 00:17:13.534 clat percentiles (usec): 00:17:13.534 | 1.00th=[ 8717], 5.00th=[12518], 10.00th=[18744], 20.00th=[20317], 00:17:13.534 | 30.00th=[22152], 40.00th=[23462], 50.00th=[24773], 60.00th=[33817], 00:17:13.534 | 70.00th=[39584], 80.00th=[42730], 90.00th=[46924], 95.00th=[52691], 00:17:13.534 | 99.00th=[57934], 99.50th=[58983], 99.90th=[58983], 99.95th=[58983], 00:17:13.534 | 99.99th=[58983] 00:17:13.534 bw ( KiB/s): min= 4816, max=11568, per=17.68%, avg=8192.00, stdev=4774.38, samples=2 00:17:13.534 iops : min= 1204, max= 2892, avg=2048.00, stdev=1193.60, samples=2 00:17:13.534 lat (msec) : 2=0.02%, 4=0.15%, 10=1.64%, 20=14.04%, 50=75.13% 00:17:13.534 lat (msec) : 100=9.01% 00:17:13.534 cpu : usr=1.97%, sys=6.88%, ctx=357, majf=0, minf=12 00:17:13.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:17:13.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:13.534 issued rwts: total=2025,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:13.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:13.534 00:17:13.534 Run status group 0 (all jobs): 00:17:13.534 READ: bw=41.3MiB/s (43.4MB/s), 6041KiB/s-15.9MiB/s (6186kB/s-16.7MB/s), io=42.1MiB (44.1MB), run=1009-1018msec 00:17:13.534 WRITE: bw=45.2MiB/s (47.4MB/s), 7642KiB/s-17.7MiB/s (7825kB/s-18.6MB/s), io=46.1MiB (48.3MB), run=1009-1018msec 00:17:13.534 00:17:13.534 Disk stats (read/write): 00:17:13.534 nvme0n1: ios=3634/3847, merge=0/0, ticks=49587/51502, in_queue=101089, util=88.37% 00:17:13.534 nvme0n2: ios=1493/1536, merge=0/0, ticks=27680/23748, in_queue=51428, util=88.04% 00:17:13.534 nvme0n3: ios=2522/2560, merge=0/0, ticks=23588/25161, in_queue=48749, util=86.76% 00:17:13.534 nvme0n4: ios=1536/2044, merge=0/0, ticks=34133/39771, in_queue=73904, util=89.48% 00:17:13.534 06:47:27 -- target/fio.sh@55 -- # sync 00:17:13.534 06:47:27 -- target/fio.sh@59 -- # fio_pid=76913 00:17:13.534 06:47:27 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:13.534 06:47:27 -- target/fio.sh@61 -- # sleep 3 00:17:13.534 [global] 00:17:13.534 thread=1 00:17:13.534 invalidate=1 00:17:13.534 rw=read 00:17:13.534 time_based=1 00:17:13.534 runtime=10 00:17:13.535 ioengine=libaio 00:17:13.535 direct=1 00:17:13.535 bs=4096 00:17:13.535 iodepth=1 00:17:13.535 norandommap=1 00:17:13.535 numjobs=1 00:17:13.535 00:17:13.535 [job0] 00:17:13.535 filename=/dev/nvme0n1 00:17:13.535 [job1] 00:17:13.535 filename=/dev/nvme0n2 00:17:13.535 [job2] 00:17:13.535 filename=/dev/nvme0n3 00:17:13.535 [job3] 00:17:13.535 filename=/dev/nvme0n4 00:17:13.793 Could not set queue depth (nvme0n1) 00:17:13.793 Could not set queue depth (nvme0n2) 00:17:13.793 Could not set queue depth (nvme0n3) 00:17:13.793 Could not set queue depth (nvme0n4) 00:17:13.793 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.793 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.793 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.793 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.793 fio-3.35 00:17:13.793 Starting 4 threads 00:17:17.081 06:47:30 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:17.081 fio: pid=76967, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:17.081 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=29827072, buflen=4096 00:17:17.081 06:47:30 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:17.081 fio: pid=76966, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:17.081 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=74289152, buflen=4096 00:17:17.340 06:47:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:17.340 06:47:31 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:17.340 fio: pid=76960, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:17.340 fio: 
io_u error on file /dev/nvme0n1: Operation not supported: read offset=41476096, buflen=4096 00:17:17.598 06:47:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:17.598 06:47:31 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:17.857 fio: pid=76962, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:17.857 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=38563840, buflen=4096 00:17:17.857 00:17:17.857 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76960: Sat Dec 14 06:47:31 2024 00:17:17.857 read: IOPS=2898, BW=11.3MiB/s (11.9MB/s)(39.6MiB/3494msec) 00:17:17.857 slat (usec): min=3, max=18459, avg=21.93, stdev=257.99 00:17:17.857 clat (usec): min=129, max=6977, avg=321.44, stdev=354.32 00:17:17.857 lat (usec): min=144, max=18743, avg=343.37, stdev=444.66 00:17:17.857 clat percentiles (usec): 00:17:17.857 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 161], 00:17:17.857 | 30.00th=[ 169], 40.00th=[ 186], 50.00th=[ 277], 60.00th=[ 297], 00:17:17.857 | 70.00th=[ 318], 80.00th=[ 359], 90.00th=[ 519], 95.00th=[ 627], 00:17:17.857 | 99.00th=[ 2057], 99.50th=[ 2606], 99.90th=[ 4178], 99.95th=[ 5276], 00:17:17.857 | 99.99th=[ 6980] 00:17:17.857 bw ( KiB/s): min= 5040, max=21368, per=25.80%, avg=12068.50, stdev=6850.55, samples=6 00:17:17.857 iops : min= 1260, max= 5342, avg=3017.00, stdev=1712.79, samples=6 00:17:17.857 lat (usec) : 250=47.30%, 500=41.74%, 750=7.41%, 1000=1.06% 00:17:17.857 lat (msec) : 2=1.42%, 4=0.94%, 10=0.13% 00:17:17.857 cpu : usr=1.03%, sys=4.09%, ctx=10227, majf=0, minf=1 00:17:17.857 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:17.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.857 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.857 issued rwts: total=10127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.857 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:17.857 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76962: Sat Dec 14 06:47:31 2024 00:17:17.857 read: IOPS=2448, BW=9795KiB/s (10.0MB/s)(36.8MiB/3845msec) 00:17:17.857 slat (usec): min=3, max=10457, avg=23.09, stdev=224.56 00:17:17.857 clat (usec): min=42, max=11264, avg=383.63, stdev=418.21 00:17:17.857 lat (usec): min=149, max=11319, avg=406.73, stdev=476.10 00:17:17.857 clat percentiles (usec): 00:17:17.857 | 1.00th=[ 151], 5.00th=[ 169], 10.00th=[ 204], 20.00th=[ 258], 00:17:17.857 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 310], 00:17:17.857 | 70.00th=[ 326], 80.00th=[ 392], 90.00th=[ 545], 95.00th=[ 676], 00:17:17.857 | 99.00th=[ 2376], 99.50th=[ 3294], 99.90th=[ 5014], 99.95th=[ 6718], 00:17:17.857 | 99.99th=[11207] 00:17:17.857 bw ( KiB/s): min= 5032, max=12920, per=20.26%, avg=9477.43, stdev=3331.82, samples=7 00:17:17.857 iops : min= 1258, max= 3230, avg=2369.14, stdev=833.14, samples=7 00:17:17.857 lat (usec) : 50=0.01%, 100=0.03%, 250=16.66%, 500=70.83%, 750=7.97% 00:17:17.857 lat (usec) : 1000=1.27% 00:17:17.857 lat (msec) : 2=1.75%, 4=1.26%, 10=0.19%, 20=0.01% 00:17:17.857 cpu : usr=0.78%, sys=3.49%, ctx=9582, majf=0, minf=1 00:17:17.857 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:17.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:17:17.857 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.857 issued rwts: total=9416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.857 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:17.857 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76966: Sat Dec 14 06:47:31 2024 00:17:17.857 read: IOPS=5574, BW=21.8MiB/s (22.8MB/s)(70.8MiB/3254msec) 00:17:17.857 slat (usec): min=9, max=14774, avg=15.89, stdev=126.46 00:17:17.857 clat (usec): min=114, max=4082, avg=162.13, stdev=57.36 00:17:17.857 lat (usec): min=127, max=14961, avg=178.02, stdev=139.21 00:17:17.857 clat percentiles (usec): 00:17:17.857 | 1.00th=[ 122], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 137], 00:17:17.857 | 30.00th=[ 141], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 159], 00:17:17.857 | 70.00th=[ 167], 80.00th=[ 182], 90.00th=[ 202], 95.00th=[ 221], 00:17:17.857 | 99.00th=[ 318], 99.50th=[ 355], 99.90th=[ 424], 99.95th=[ 494], 00:17:17.857 | 99.99th=[ 3228] 00:17:17.857 bw ( KiB/s): min=19099, max=24712, per=48.38%, avg=22627.17, stdev=2229.88, samples=6 00:17:17.857 iops : min= 4774, max= 6178, avg=5656.67, stdev=557.71, samples=6 00:17:17.857 lat (usec) : 250=97.83%, 500=2.11%, 750=0.01% 00:17:17.857 lat (msec) : 2=0.02%, 4=0.02%, 10=0.01% 00:17:17.857 cpu : usr=1.78%, sys=6.39%, ctx=18146, majf=0, minf=2 00:17:17.857 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:17.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.857 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.857 issued rwts: total=18138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.857 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:17.857 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76967: Sat Dec 14 06:47:31 2024 00:17:17.857 read: IOPS=2469, BW=9877KiB/s (10.1MB/s)(28.4MiB/2949msec) 00:17:17.857 slat (usec): min=7, max=3465, avg=17.46, stdev=50.22 00:17:17.857 clat (nsec): min=1536, max=7003.5k, avg=385719.22, stdev=355741.39 00:17:17.857 lat (usec): min=170, max=7016, avg=403.18, stdev=361.73 00:17:17.857 clat percentiles (usec): 00:17:17.857 | 1.00th=[ 219], 5.00th=[ 247], 10.00th=[ 258], 20.00th=[ 273], 00:17:17.857 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 310], 00:17:17.857 | 70.00th=[ 322], 80.00th=[ 355], 90.00th=[ 553], 95.00th=[ 676], 00:17:17.857 | 99.00th=[ 2212], 99.50th=[ 2802], 99.90th=[ 4228], 99.95th=[ 5145], 00:17:17.857 | 99.99th=[ 6980] 00:17:17.857 bw ( KiB/s): min= 5016, max=12928, per=22.35%, avg=10454.40, stdev=3260.72, samples=5 00:17:17.857 iops : min= 1254, max= 3232, avg=2613.60, stdev=815.18, samples=5 00:17:17.857 lat (usec) : 2=0.01%, 4=0.01%, 100=0.01%, 250=5.95%, 500=80.68% 00:17:17.857 lat (usec) : 750=8.98%, 1000=1.37% 00:17:17.857 lat (msec) : 2=1.69%, 4=1.15%, 10=0.12% 00:17:17.857 cpu : usr=0.71%, sys=3.77%, ctx=7506, majf=0, minf=2 00:17:17.857 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:17.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.857 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.857 issued rwts: total=7283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.857 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:17.857 00:17:17.857 Run status group 0 (all jobs): 00:17:17.857 READ: bw=45.7MiB/s 
(47.9MB/s), 9795KiB/s-21.8MiB/s (10.0MB/s-22.8MB/s), io=176MiB (184MB), run=2949-3845msec 00:17:17.857 00:17:17.857 Disk stats (read/write): 00:17:17.857 nvme0n1: ios=9796/0, merge=0/0, ticks=3112/0, in_queue=3112, util=94.93% 00:17:17.857 nvme0n2: ios=8541/0, merge=0/0, ticks=3340/0, in_queue=3340, util=95.15% 00:17:17.857 nvme0n3: ios=17461/0, merge=0/0, ticks=2891/0, in_queue=2891, util=96.15% 00:17:17.857 nvme0n4: ios=7029/0, merge=0/0, ticks=2686/0, in_queue=2686, util=96.46% 00:17:17.857 06:47:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:17.857 06:47:31 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:18.116 06:47:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:18.116 06:47:31 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:18.374 06:47:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:18.374 06:47:32 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:18.633 06:47:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:18.633 06:47:32 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:18.891 06:47:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:18.891 06:47:32 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:19.150 06:47:33 -- target/fio.sh@69 -- # fio_status=0 00:17:19.150 06:47:33 -- target/fio.sh@70 -- # wait 76913 00:17:19.150 06:47:33 -- target/fio.sh@70 -- # fio_status=4 00:17:19.150 06:47:33 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:19.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:19.150 06:47:33 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:19.150 06:47:33 -- common/autotest_common.sh@1208 -- # local i=0 00:17:19.150 06:47:33 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:19.150 06:47:33 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:17:19.150 06:47:33 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:17:19.150 06:47:33 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:19.150 06:47:33 -- common/autotest_common.sh@1220 -- # return 0 00:17:19.150 06:47:33 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:19.150 nvmf hotplug test: fio failed as expected 00:17:19.150 06:47:33 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:19.150 06:47:33 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.718 06:47:33 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:19.718 06:47:33 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:19.718 06:47:33 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:19.718 06:47:33 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:19.718 06:47:33 -- target/fio.sh@91 -- # nvmftestfini 00:17:19.718 06:47:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:19.718 06:47:33 -- nvmf/common.sh@116 -- # sync 00:17:19.718 06:47:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:19.718 06:47:33 -- 
nvmf/common.sh@119 -- # set +e 00:17:19.718 06:47:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:19.718 06:47:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:19.718 rmmod nvme_tcp 00:17:19.718 rmmod nvme_fabrics 00:17:19.718 rmmod nvme_keyring 00:17:19.718 06:47:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:19.718 06:47:33 -- nvmf/common.sh@123 -- # set -e 00:17:19.718 06:47:33 -- nvmf/common.sh@124 -- # return 0 00:17:19.718 06:47:33 -- nvmf/common.sh@477 -- # '[' -n 76426 ']' 00:17:19.718 06:47:33 -- nvmf/common.sh@478 -- # killprocess 76426 00:17:19.718 06:47:33 -- common/autotest_common.sh@936 -- # '[' -z 76426 ']' 00:17:19.718 06:47:33 -- common/autotest_common.sh@940 -- # kill -0 76426 00:17:19.718 06:47:33 -- common/autotest_common.sh@941 -- # uname 00:17:19.718 06:47:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:19.718 06:47:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76426 00:17:19.718 killing process with pid 76426 00:17:19.718 06:47:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:19.718 06:47:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:19.718 06:47:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76426' 00:17:19.718 06:47:33 -- common/autotest_common.sh@955 -- # kill 76426 00:17:19.718 06:47:33 -- common/autotest_common.sh@960 -- # wait 76426 00:17:19.977 06:47:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:19.977 06:47:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:19.977 06:47:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:19.977 06:47:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:19.977 06:47:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:19.977 06:47:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.977 06:47:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:19.977 06:47:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.977 06:47:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:19.977 00:17:19.977 real 0m20.134s 00:17:19.977 user 1m15.903s 00:17:19.977 sys 0m8.656s 00:17:19.977 06:47:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:19.977 06:47:33 -- common/autotest_common.sh@10 -- # set +x 00:17:19.977 ************************************ 00:17:19.977 END TEST nvmf_fio_target 00:17:19.977 ************************************ 00:17:19.977 06:47:33 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:19.977 06:47:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:19.977 06:47:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:19.977 06:47:33 -- common/autotest_common.sh@10 -- # set +x 00:17:19.977 ************************************ 00:17:19.977 START TEST nvmf_bdevio 00:17:19.977 ************************************ 00:17:19.977 06:47:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:20.238 * Looking for test storage... 
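For reference, the hotplug sequence traced earlier in this run (a background fio read job racing bdev deletion over the nvmf target, ending in the expected "fio failed as expected" message before the nvmf_fio_target teardown above) reduces to roughly the following shell sketch. The wrapper flags, bdev names, and outcome are taken from the trace; the actual loop structure in target/fio.sh may differ:

    # start a 10-second read job against the exported namespaces and remember its pid
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    # delete the backing bdevs while fio is still running; subsequent reads
    # are expected to fail with "Operation not supported"
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_raid_delete concat0
    $rpc bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $rpc bdev_malloc_delete "$m"
    done
    # fio exiting non-zero here is the expected (passing) result
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'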
00:17:20.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:20.238 06:47:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:20.238 06:47:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:20.238 06:47:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:20.238 06:47:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:20.238 06:47:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:20.238 06:47:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:20.238 06:47:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:20.238 06:47:34 -- scripts/common.sh@335 -- # IFS=.-: 00:17:20.238 06:47:34 -- scripts/common.sh@335 -- # read -ra ver1 00:17:20.238 06:47:34 -- scripts/common.sh@336 -- # IFS=.-: 00:17:20.238 06:47:34 -- scripts/common.sh@336 -- # read -ra ver2 00:17:20.238 06:47:34 -- scripts/common.sh@337 -- # local 'op=<' 00:17:20.238 06:47:34 -- scripts/common.sh@339 -- # ver1_l=2 00:17:20.238 06:47:34 -- scripts/common.sh@340 -- # ver2_l=1 00:17:20.238 06:47:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:20.238 06:47:34 -- scripts/common.sh@343 -- # case "$op" in 00:17:20.238 06:47:34 -- scripts/common.sh@344 -- # : 1 00:17:20.238 06:47:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:20.238 06:47:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:20.238 06:47:34 -- scripts/common.sh@364 -- # decimal 1 00:17:20.238 06:47:34 -- scripts/common.sh@352 -- # local d=1 00:17:20.238 06:47:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:20.238 06:47:34 -- scripts/common.sh@354 -- # echo 1 00:17:20.238 06:47:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:20.238 06:47:34 -- scripts/common.sh@365 -- # decimal 2 00:17:20.238 06:47:34 -- scripts/common.sh@352 -- # local d=2 00:17:20.238 06:47:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:20.238 06:47:34 -- scripts/common.sh@354 -- # echo 2 00:17:20.238 06:47:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:20.238 06:47:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:20.238 06:47:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:20.238 06:47:34 -- scripts/common.sh@367 -- # return 0 00:17:20.238 06:47:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:20.238 06:47:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:20.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.238 --rc genhtml_branch_coverage=1 00:17:20.238 --rc genhtml_function_coverage=1 00:17:20.238 --rc genhtml_legend=1 00:17:20.238 --rc geninfo_all_blocks=1 00:17:20.238 --rc geninfo_unexecuted_blocks=1 00:17:20.238 00:17:20.238 ' 00:17:20.238 06:47:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:20.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.238 --rc genhtml_branch_coverage=1 00:17:20.238 --rc genhtml_function_coverage=1 00:17:20.238 --rc genhtml_legend=1 00:17:20.238 --rc geninfo_all_blocks=1 00:17:20.238 --rc geninfo_unexecuted_blocks=1 00:17:20.238 00:17:20.238 ' 00:17:20.238 06:47:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:20.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.238 --rc genhtml_branch_coverage=1 00:17:20.238 --rc genhtml_function_coverage=1 00:17:20.238 --rc genhtml_legend=1 00:17:20.238 --rc geninfo_all_blocks=1 00:17:20.238 --rc geninfo_unexecuted_blocks=1 00:17:20.238 00:17:20.238 ' 00:17:20.238 
06:47:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:20.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.238 --rc genhtml_branch_coverage=1 00:17:20.238 --rc genhtml_function_coverage=1 00:17:20.238 --rc genhtml_legend=1 00:17:20.238 --rc geninfo_all_blocks=1 00:17:20.238 --rc geninfo_unexecuted_blocks=1 00:17:20.238 00:17:20.238 ' 00:17:20.238 06:47:34 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:20.238 06:47:34 -- nvmf/common.sh@7 -- # uname -s 00:17:20.238 06:47:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.238 06:47:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.238 06:47:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.238 06:47:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.238 06:47:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.238 06:47:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.238 06:47:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.238 06:47:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.238 06:47:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.238 06:47:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.238 06:47:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:17:20.238 06:47:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:17:20.238 06:47:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.238 06:47:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.238 06:47:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:20.238 06:47:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:20.238 06:47:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.238 06:47:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.238 06:47:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.239 06:47:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.239 06:47:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.239 06:47:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.239 06:47:34 -- paths/export.sh@5 -- # export PATH 00:17:20.239 06:47:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.239 06:47:34 -- nvmf/common.sh@46 -- # : 0 00:17:20.239 06:47:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:20.239 06:47:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:20.239 06:47:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:20.239 06:47:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.239 06:47:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.239 06:47:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:20.239 06:47:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:20.239 06:47:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:20.239 06:47:34 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:20.239 06:47:34 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:20.239 06:47:34 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:20.239 06:47:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:20.239 06:47:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.239 06:47:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:20.239 06:47:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:20.239 06:47:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:20.239 06:47:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.239 06:47:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.239 06:47:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.239 06:47:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:20.239 06:47:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:20.239 06:47:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:20.239 06:47:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:20.239 06:47:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:20.239 06:47:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:20.239 06:47:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.239 06:47:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.239 06:47:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:20.239 06:47:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:20.239 06:47:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:20.239 06:47:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:20.239 06:47:34 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:20.239 06:47:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.239 06:47:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:20.239 06:47:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:20.239 06:47:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:20.239 06:47:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:20.239 06:47:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:20.239 06:47:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:20.239 Cannot find device "nvmf_tgt_br" 00:17:20.239 06:47:34 -- nvmf/common.sh@154 -- # true 00:17:20.239 06:47:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:20.239 Cannot find device "nvmf_tgt_br2" 00:17:20.239 06:47:34 -- nvmf/common.sh@155 -- # true 00:17:20.239 06:47:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:20.239 06:47:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:20.239 Cannot find device "nvmf_tgt_br" 00:17:20.239 06:47:34 -- nvmf/common.sh@157 -- # true 00:17:20.239 06:47:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:20.239 Cannot find device "nvmf_tgt_br2" 00:17:20.239 06:47:34 -- nvmf/common.sh@158 -- # true 00:17:20.239 06:47:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:20.501 06:47:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:20.501 06:47:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:20.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.501 06:47:34 -- nvmf/common.sh@161 -- # true 00:17:20.501 06:47:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:20.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.501 06:47:34 -- nvmf/common.sh@162 -- # true 00:17:20.501 06:47:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:20.501 06:47:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:20.501 06:47:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:20.501 06:47:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:20.501 06:47:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:20.501 06:47:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:20.501 06:47:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:20.501 06:47:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:20.501 06:47:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:20.501 06:47:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:20.501 06:47:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:20.501 06:47:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:20.501 06:47:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:20.501 06:47:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:20.501 06:47:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:20.501 06:47:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:17:20.501 06:47:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:20.501 06:47:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:20.501 06:47:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:20.501 06:47:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:20.501 06:47:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:20.501 06:47:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:20.501 06:47:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:20.760 06:47:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:20.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:17:20.760 00:17:20.760 --- 10.0.0.2 ping statistics --- 00:17:20.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.760 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:20.760 06:47:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:20.760 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:20.760 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:17:20.760 00:17:20.760 --- 10.0.0.3 ping statistics --- 00:17:20.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.760 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:20.760 06:47:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:20.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:20.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:20.760 00:17:20.760 --- 10.0.0.1 ping statistics --- 00:17:20.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.760 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:20.760 06:47:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.760 06:47:34 -- nvmf/common.sh@421 -- # return 0 00:17:20.760 06:47:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:20.760 06:47:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.760 06:47:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:20.760 06:47:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:20.760 06:47:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.760 06:47:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:20.760 06:47:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:20.760 06:47:34 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:20.760 06:47:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:20.760 06:47:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:20.760 06:47:34 -- common/autotest_common.sh@10 -- # set +x 00:17:20.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
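Condensed for readability, the veth/namespace plumbing traced above builds the following topology; every command below appears in the trace (addresses 10.0.0.1-10.0.0.3, TCP port 4420), with only the interleaved timestamps and the "Cannot find device" cleanup probes omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> first target IP
    ping -c 1 10.0.0.3                                  # initiator -> second target IP
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator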
00:17:20.760 06:47:34 -- nvmf/common.sh@469 -- # nvmfpid=77301 00:17:20.760 06:47:34 -- nvmf/common.sh@470 -- # waitforlisten 77301 00:17:20.760 06:47:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:20.760 06:47:34 -- common/autotest_common.sh@829 -- # '[' -z 77301 ']' 00:17:20.760 06:47:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.760 06:47:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.760 06:47:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.760 06:47:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.760 06:47:34 -- common/autotest_common.sh@10 -- # set +x 00:17:20.760 [2024-12-14 06:47:34.588038] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:20.760 [2024-12-14 06:47:34.588840] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.760 [2024-12-14 06:47:34.735866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:21.019 [2024-12-14 06:47:34.875599] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:21.019 [2024-12-14 06:47:34.876333] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.019 [2024-12-14 06:47:34.876595] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.019 [2024-12-14 06:47:34.877303] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:21.019 [2024-12-14 06:47:34.877792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:21.019 [2024-12-14 06:47:34.877912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:21.019 [2024-12-14 06:47:34.878067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:21.019 [2024-12-14 06:47:34.878058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:21.953 06:47:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.953 06:47:35 -- common/autotest_common.sh@862 -- # return 0 00:17:21.953 06:47:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:21.953 06:47:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:21.953 06:47:35 -- common/autotest_common.sh@10 -- # set +x 00:17:21.953 06:47:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.953 06:47:35 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:21.953 06:47:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.953 06:47:35 -- common/autotest_common.sh@10 -- # set +x 00:17:21.953 [2024-12-14 06:47:35.656071] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.953 06:47:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.953 06:47:35 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:21.953 06:47:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.953 06:47:35 -- common/autotest_common.sh@10 -- # set +x 00:17:21.953 Malloc0 00:17:21.953 06:47:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.953 06:47:35 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:21.953 06:47:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.953 06:47:35 -- common/autotest_common.sh@10 -- # set +x 00:17:21.953 06:47:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.953 06:47:35 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:21.953 06:47:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.953 06:47:35 -- common/autotest_common.sh@10 -- # set +x 00:17:21.953 06:47:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.953 06:47:35 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:21.953 06:47:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.953 06:47:35 -- common/autotest_common.sh@10 -- # set +x 00:17:21.954 [2024-12-14 06:47:35.734734] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.954 06:47:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.954 06:47:35 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:21.954 06:47:35 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:21.954 06:47:35 -- nvmf/common.sh@520 -- # config=() 00:17:21.954 06:47:35 -- nvmf/common.sh@520 -- # local subsystem config 00:17:21.954 06:47:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:21.954 06:47:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:21.954 { 00:17:21.954 "params": { 00:17:21.954 "name": "Nvme$subsystem", 00:17:21.954 "trtype": "$TEST_TRANSPORT", 00:17:21.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:21.954 "adrfam": "ipv4", 00:17:21.954 "trsvcid": "$NVMF_PORT", 00:17:21.954 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:21.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:21.954 "hdgst": ${hdgst:-false}, 00:17:21.954 "ddgst": ${ddgst:-false} 00:17:21.954 }, 00:17:21.954 "method": "bdev_nvme_attach_controller" 00:17:21.954 } 00:17:21.954 EOF 00:17:21.954 )") 00:17:21.954 06:47:35 -- nvmf/common.sh@542 -- # cat 00:17:21.954 06:47:35 -- nvmf/common.sh@544 -- # jq . 00:17:21.954 06:47:35 -- nvmf/common.sh@545 -- # IFS=, 00:17:21.954 06:47:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:21.954 "params": { 00:17:21.954 "name": "Nvme1", 00:17:21.954 "trtype": "tcp", 00:17:21.954 "traddr": "10.0.0.2", 00:17:21.954 "adrfam": "ipv4", 00:17:21.954 "trsvcid": "4420", 00:17:21.954 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.954 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:21.954 "hdgst": false, 00:17:21.954 "ddgst": false 00:17:21.954 }, 00:17:21.954 "method": "bdev_nvme_attach_controller" 00:17:21.954 }' 00:17:21.954 [2024-12-14 06:47:35.800553] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:21.954 [2024-12-14 06:47:35.800661] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77360 ] 00:17:21.954 [2024-12-14 06:47:35.938250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:22.212 [2024-12-14 06:47:36.051869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.212 [2024-12-14 06:47:36.051991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.212 [2024-12-14 06:47:36.051992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.471 [2024-12-14 06:47:36.257934] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:17:22.471 [2024-12-14 06:47:36.257998] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:22.471 I/O targets: 00:17:22.471 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:22.471 00:17:22.471 00:17:22.471 CUnit - A unit testing framework for C - Version 2.1-3 00:17:22.471 http://cunit.sourceforge.net/ 00:17:22.471 00:17:22.471 00:17:22.471 Suite: bdevio tests on: Nvme1n1 00:17:22.471 Test: blockdev write read block ...passed 00:17:22.471 Test: blockdev write zeroes read block ...passed 00:17:22.471 Test: blockdev write zeroes read no split ...passed 00:17:22.471 Test: blockdev write zeroes read split ...passed 00:17:22.471 Test: blockdev write zeroes read split partial ...passed 00:17:22.471 Test: blockdev reset ...[2024-12-14 06:47:36.376440] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:22.471 [2024-12-14 06:47:36.376554] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc4910 (9): Bad file descriptor 00:17:22.471 passed 00:17:22.471 Test: blockdev write read 8 blocks ...[2024-12-14 06:47:36.387495] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:22.471 passed 00:17:22.471 Test: blockdev write read size > 128k ...passed 00:17:22.471 Test: blockdev write read invalid size ...passed 00:17:22.471 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:22.471 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:22.471 Test: blockdev write read max offset ...passed 00:17:22.730 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:22.730 Test: blockdev writev readv 8 blocks ...passed 00:17:22.730 Test: blockdev writev readv 30 x 1block ...passed 00:17:22.730 Test: blockdev writev readv block ...passed 00:17:22.730 Test: blockdev writev readv size > 128k ...passed 00:17:22.730 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:22.730 Test: blockdev comparev and writev ...[2024-12-14 06:47:36.561292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.730 [2024-12-14 06:47:36.561341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:22.730 [2024-12-14 06:47:36.561360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.730 [2024-12-14 06:47:36.561370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:22.730 [2024-12-14 06:47:36.561696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.730 [2024-12-14 06:47:36.561712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:22.730 [2024-12-14 06:47:36.561727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.730 [2024-12-14 06:47:36.561793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:22.730 [2024-12-14 06:47:36.562115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.730 [2024-12-14 06:47:36.562157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:22.730 [2024-12-14 06:47:36.562182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.730 [2024-12-14 06:47:36.562191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:22.730 [2024-12-14 06:47:36.562488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.730 [2024-12-14 06:47:36.562504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:22.730 [2024-12-14 06:47:36.562519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.730 [2024-12-14 06:47:36.562528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:17:22.730 passed 00:17:22.730 Test: blockdev nvme passthru rw ...passed 00:17:22.730 Test: blockdev nvme passthru vendor specific ...[2024-12-14 06:47:36.646283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:22.730 [2024-12-14 06:47:36.646309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:22.730 [2024-12-14 06:47:36.646434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:22.730 [2024-12-14 06:47:36.646461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:22.731 [2024-12-14 06:47:36.646597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:22.731 [2024-12-14 06:47:36.646611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:22.731 [2024-12-14 06:47:36.646714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:22.731 [2024-12-14 06:47:36.646728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:22.731 passed 00:17:22.731 Test: blockdev nvme admin passthru ...passed 00:17:22.731 Test: blockdev copy ...passed 00:17:22.731 00:17:22.731 Run Summary: Type Total Ran Passed Failed Inactive 00:17:22.731 suites 1 1 n/a 0 0 00:17:22.731 tests 23 23 23 0 0 00:17:22.731 asserts 152 152 152 0 n/a 00:17:22.731 00:17:22.731 Elapsed time = 0.904 seconds 00:17:23.297 06:47:37 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.297 06:47:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.297 06:47:37 -- common/autotest_common.sh@10 -- # set +x 00:17:23.297 06:47:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.297 06:47:37 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:23.297 06:47:37 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:23.297 06:47:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:23.297 06:47:37 -- nvmf/common.sh@116 -- # sync 00:17:23.297 06:47:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:23.297 06:47:37 -- nvmf/common.sh@119 -- # set +e 00:17:23.297 06:47:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:23.297 06:47:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:23.297 rmmod nvme_tcp 00:17:23.297 rmmod nvme_fabrics 00:17:23.297 rmmod nvme_keyring 00:17:23.297 06:47:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:23.297 06:47:37 -- nvmf/common.sh@123 -- # set -e 00:17:23.297 06:47:37 -- nvmf/common.sh@124 -- # return 0 00:17:23.297 06:47:37 -- nvmf/common.sh@477 -- # '[' -n 77301 ']' 00:17:23.297 06:47:37 -- nvmf/common.sh@478 -- # killprocess 77301 00:17:23.297 06:47:37 -- common/autotest_common.sh@936 -- # '[' -z 77301 ']' 00:17:23.297 06:47:37 -- common/autotest_common.sh@940 -- # kill -0 77301 00:17:23.297 06:47:37 -- common/autotest_common.sh@941 -- # uname 00:17:23.297 06:47:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:23.297 06:47:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77301 00:17:23.297 06:47:37 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:23.297 06:47:37 -- 
common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:23.297 killing process with pid 77301 00:17:23.297 06:47:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77301' 00:17:23.297 06:47:37 -- common/autotest_common.sh@955 -- # kill 77301 00:17:23.297 06:47:37 -- common/autotest_common.sh@960 -- # wait 77301 00:17:23.865 06:47:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:23.865 06:47:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:23.865 06:47:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:23.865 06:47:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:23.865 06:47:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:23.865 06:47:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.865 06:47:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.865 06:47:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.865 06:47:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:23.865 ************************************ 00:17:23.865 END TEST nvmf_bdevio 00:17:23.865 ************************************ 00:17:23.865 00:17:23.865 real 0m3.686s 00:17:23.865 user 0m13.044s 00:17:23.865 sys 0m0.920s 00:17:23.865 06:47:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:23.865 06:47:37 -- common/autotest_common.sh@10 -- # set +x 00:17:23.865 06:47:37 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:17:23.865 06:47:37 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:23.865 06:47:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:23.865 06:47:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:23.865 06:47:37 -- common/autotest_common.sh@10 -- # set +x 00:17:23.865 ************************************ 00:17:23.865 START TEST nvmf_bdevio_no_huge 00:17:23.865 ************************************ 00:17:23.865 06:47:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:23.865 * Looking for test storage... 
00:17:23.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:23.865 06:47:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:23.865 06:47:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:23.865 06:47:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:23.865 06:47:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:23.865 06:47:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:23.865 06:47:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:23.865 06:47:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:23.865 06:47:37 -- scripts/common.sh@335 -- # IFS=.-: 00:17:23.865 06:47:37 -- scripts/common.sh@335 -- # read -ra ver1 00:17:23.865 06:47:37 -- scripts/common.sh@336 -- # IFS=.-: 00:17:23.865 06:47:37 -- scripts/common.sh@336 -- # read -ra ver2 00:17:23.865 06:47:37 -- scripts/common.sh@337 -- # local 'op=<' 00:17:23.865 06:47:37 -- scripts/common.sh@339 -- # ver1_l=2 00:17:23.865 06:47:37 -- scripts/common.sh@340 -- # ver2_l=1 00:17:23.865 06:47:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:23.865 06:47:37 -- scripts/common.sh@343 -- # case "$op" in 00:17:23.865 06:47:37 -- scripts/common.sh@344 -- # : 1 00:17:23.865 06:47:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:23.865 06:47:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:23.865 06:47:37 -- scripts/common.sh@364 -- # decimal 1 00:17:23.865 06:47:37 -- scripts/common.sh@352 -- # local d=1 00:17:23.865 06:47:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:23.865 06:47:37 -- scripts/common.sh@354 -- # echo 1 00:17:23.865 06:47:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:23.865 06:47:37 -- scripts/common.sh@365 -- # decimal 2 00:17:23.865 06:47:37 -- scripts/common.sh@352 -- # local d=2 00:17:23.865 06:47:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:23.865 06:47:37 -- scripts/common.sh@354 -- # echo 2 00:17:23.865 06:47:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:23.865 06:47:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:23.865 06:47:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:23.865 06:47:37 -- scripts/common.sh@367 -- # return 0 00:17:23.865 06:47:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:23.865 06:47:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:23.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.865 --rc genhtml_branch_coverage=1 00:17:23.865 --rc genhtml_function_coverage=1 00:17:23.865 --rc genhtml_legend=1 00:17:23.865 --rc geninfo_all_blocks=1 00:17:23.865 --rc geninfo_unexecuted_blocks=1 00:17:23.865 00:17:23.865 ' 00:17:23.865 06:47:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:23.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.865 --rc genhtml_branch_coverage=1 00:17:23.865 --rc genhtml_function_coverage=1 00:17:23.866 --rc genhtml_legend=1 00:17:23.866 --rc geninfo_all_blocks=1 00:17:23.866 --rc geninfo_unexecuted_blocks=1 00:17:23.866 00:17:23.866 ' 00:17:23.866 06:47:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:23.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.866 --rc genhtml_branch_coverage=1 00:17:23.866 --rc genhtml_function_coverage=1 00:17:23.866 --rc genhtml_legend=1 00:17:23.866 --rc geninfo_all_blocks=1 00:17:23.866 --rc geninfo_unexecuted_blocks=1 00:17:23.866 00:17:23.866 ' 00:17:23.866 
06:47:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:23.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.866 --rc genhtml_branch_coverage=1 00:17:23.866 --rc genhtml_function_coverage=1 00:17:23.866 --rc genhtml_legend=1 00:17:23.866 --rc geninfo_all_blocks=1 00:17:23.866 --rc geninfo_unexecuted_blocks=1 00:17:23.866 00:17:23.866 ' 00:17:23.866 06:47:37 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:23.866 06:47:37 -- nvmf/common.sh@7 -- # uname -s 00:17:23.866 06:47:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.866 06:47:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.866 06:47:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.866 06:47:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.866 06:47:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.866 06:47:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.866 06:47:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.866 06:47:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.866 06:47:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.866 06:47:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.125 06:47:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:17:24.125 06:47:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:17:24.125 06:47:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.125 06:47:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.125 06:47:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:24.125 06:47:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:24.125 06:47:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.125 06:47:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.125 06:47:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.125 06:47:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.125 06:47:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.125 06:47:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.125 06:47:37 -- paths/export.sh@5 -- # export PATH 00:17:24.125 06:47:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.125 06:47:37 -- nvmf/common.sh@46 -- # : 0 00:17:24.125 06:47:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:24.125 06:47:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:24.125 06:47:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:24.125 06:47:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.125 06:47:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.125 06:47:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:24.125 06:47:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:24.125 06:47:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:24.125 06:47:37 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:24.125 06:47:37 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:24.125 06:47:37 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:24.125 06:47:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:24.125 06:47:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.125 06:47:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:24.125 06:47:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:24.125 06:47:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:24.125 06:47:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.125 06:47:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.125 06:47:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.125 06:47:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:24.125 06:47:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:24.125 06:47:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:24.125 06:47:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:24.125 06:47:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:24.125 06:47:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:24.125 06:47:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:24.125 06:47:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:24.125 06:47:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:24.125 06:47:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:24.125 06:47:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:24.125 06:47:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:24.125 06:47:37 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:24.125 06:47:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:24.125 06:47:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:24.125 06:47:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:24.125 06:47:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:24.125 06:47:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:24.125 06:47:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:24.125 06:47:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:24.125 Cannot find device "nvmf_tgt_br" 00:17:24.125 06:47:37 -- nvmf/common.sh@154 -- # true 00:17:24.125 06:47:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:24.125 Cannot find device "nvmf_tgt_br2" 00:17:24.125 06:47:37 -- nvmf/common.sh@155 -- # true 00:17:24.125 06:47:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:24.125 06:47:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:24.125 Cannot find device "nvmf_tgt_br" 00:17:24.126 06:47:37 -- nvmf/common.sh@157 -- # true 00:17:24.126 06:47:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:24.126 Cannot find device "nvmf_tgt_br2" 00:17:24.126 06:47:37 -- nvmf/common.sh@158 -- # true 00:17:24.126 06:47:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:24.126 06:47:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:24.126 06:47:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:24.126 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:24.126 06:47:38 -- nvmf/common.sh@161 -- # true 00:17:24.126 06:47:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:24.126 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:24.126 06:47:38 -- nvmf/common.sh@162 -- # true 00:17:24.126 06:47:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:24.126 06:47:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:24.126 06:47:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:24.126 06:47:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:24.126 06:47:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:24.126 06:47:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:24.126 06:47:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:24.126 06:47:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:24.126 06:47:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:24.126 06:47:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:24.126 06:47:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:24.126 06:47:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:24.126 06:47:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:24.126 06:47:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:24.385 06:47:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:24.385 06:47:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:17:24.385 06:47:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:24.385 06:47:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:24.385 06:47:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:24.385 06:47:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:24.385 06:47:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:24.385 06:47:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:24.385 06:47:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:24.385 06:47:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:24.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:24.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:17:24.385 00:17:24.385 --- 10.0.0.2 ping statistics --- 00:17:24.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.385 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:17:24.385 06:47:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:24.385 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:24.385 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:17:24.385 00:17:24.385 --- 10.0.0.3 ping statistics --- 00:17:24.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.385 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:24.385 06:47:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:24.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:24.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:17:24.385 00:17:24.385 --- 10.0.0.1 ping statistics --- 00:17:24.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.385 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:24.385 06:47:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:24.385 06:47:38 -- nvmf/common.sh@421 -- # return 0 00:17:24.385 06:47:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:24.385 06:47:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:24.385 06:47:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:24.385 06:47:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:24.385 06:47:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:24.385 06:47:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:24.385 06:47:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:24.385 06:47:38 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:24.385 06:47:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:24.385 06:47:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:24.385 06:47:38 -- common/autotest_common.sh@10 -- # set +x 00:17:24.385 06:47:38 -- nvmf/common.sh@469 -- # nvmfpid=77557 00:17:24.385 06:47:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:24.385 06:47:38 -- nvmf/common.sh@470 -- # waitforlisten 77557 00:17:24.385 06:47:38 -- common/autotest_common.sh@829 -- # '[' -z 77557 ']' 00:17:24.385 06:47:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.385 06:47:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:24.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:24.385 06:47:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.385 06:47:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:24.385 06:47:38 -- common/autotest_common.sh@10 -- # set +x 00:17:24.385 [2024-12-14 06:47:38.299926] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:24.385 [2024-12-14 06:47:38.300069] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:24.644 [2024-12-14 06:47:38.454886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:24.903 [2024-12-14 06:47:38.646950] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:24.903 [2024-12-14 06:47:38.647108] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.903 [2024-12-14 06:47:38.647120] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.903 [2024-12-14 06:47:38.647128] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.903 [2024-12-14 06:47:38.647287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:24.903 [2024-12-14 06:47:38.647437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:24.903 [2024-12-14 06:47:38.647566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:24.903 [2024-12-14 06:47:38.647570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:25.471 06:47:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:25.471 06:47:39 -- common/autotest_common.sh@862 -- # return 0 00:17:25.471 06:47:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:25.471 06:47:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:25.471 06:47:39 -- common/autotest_common.sh@10 -- # set +x 00:17:25.471 06:47:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.471 06:47:39 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:25.471 06:47:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.471 06:47:39 -- common/autotest_common.sh@10 -- # set +x 00:17:25.471 [2024-12-14 06:47:39.311679] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.471 06:47:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.471 06:47:39 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:25.471 06:47:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.471 06:47:39 -- common/autotest_common.sh@10 -- # set +x 00:17:25.471 Malloc0 00:17:25.471 06:47:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.471 06:47:39 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:25.471 06:47:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.471 06:47:39 -- common/autotest_common.sh@10 -- # set +x 00:17:25.471 06:47:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.471 06:47:39 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:25.471 06:47:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.471 
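The nvmf_veth_init sequence traced above (nvmf/common.sh@140-201) reduces to a small namespace-plus-bridge topology. A condensed sketch of the same commands, using the interface names and 10.0.0.x addresses from the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The script additionally creates a second target interface (nvmf_tgt_if2 at 10.0.0.3) and brings every link up before the ping checks; nvmf_tgt then runs inside the namespace via NVMF_TARGET_NS_CMD.
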
06:47:39 -- common/autotest_common.sh@10 -- # set +x 00:17:25.471 06:47:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.471 06:47:39 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.471 06:47:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.471 06:47:39 -- common/autotest_common.sh@10 -- # set +x 00:17:25.471 [2024-12-14 06:47:39.352245] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.471 06:47:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.471 06:47:39 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:25.471 06:47:39 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:25.472 06:47:39 -- nvmf/common.sh@520 -- # config=() 00:17:25.472 06:47:39 -- nvmf/common.sh@520 -- # local subsystem config 00:17:25.472 06:47:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:25.472 06:47:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:25.472 { 00:17:25.472 "params": { 00:17:25.472 "name": "Nvme$subsystem", 00:17:25.472 "trtype": "$TEST_TRANSPORT", 00:17:25.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:25.472 "adrfam": "ipv4", 00:17:25.472 "trsvcid": "$NVMF_PORT", 00:17:25.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:25.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:25.472 "hdgst": ${hdgst:-false}, 00:17:25.472 "ddgst": ${ddgst:-false} 00:17:25.472 }, 00:17:25.472 "method": "bdev_nvme_attach_controller" 00:17:25.472 } 00:17:25.472 EOF 00:17:25.472 )") 00:17:25.472 06:47:39 -- nvmf/common.sh@542 -- # cat 00:17:25.472 06:47:39 -- nvmf/common.sh@544 -- # jq . 00:17:25.472 06:47:39 -- nvmf/common.sh@545 -- # IFS=, 00:17:25.472 06:47:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:25.472 "params": { 00:17:25.472 "name": "Nvme1", 00:17:25.472 "trtype": "tcp", 00:17:25.472 "traddr": "10.0.0.2", 00:17:25.472 "adrfam": "ipv4", 00:17:25.472 "trsvcid": "4420", 00:17:25.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:25.472 "hdgst": false, 00:17:25.472 "ddgst": false 00:17:25.472 }, 00:17:25.472 "method": "bdev_nvme_attach_controller" 00:17:25.472 }' 00:17:25.472 [2024-12-14 06:47:39.415460] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:25.472 [2024-12-14 06:47:39.415570] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid77611 ] 00:17:25.731 [2024-12-14 06:47:39.562964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:25.989 [2024-12-14 06:47:39.722324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.989 [2024-12-14 06:47:39.722494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.989 [2024-12-14 06:47:39.722495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:25.989 [2024-12-14 06:47:39.895395] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
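Reassembled from the printf in the trace above, the per-controller entry that gen_nvmf_target_json feeds to bdevio on /dev/fd/62 is a single bdev_nvme_attach_controller call with these values (the surrounding subsystems wrapper added by the helper is not shown in the trace):

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
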
00:17:25.989 [2024-12-14 06:47:39.895454] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:25.989 I/O targets: 00:17:25.989 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:25.989 00:17:25.989 00:17:25.989 CUnit - A unit testing framework for C - Version 2.1-3 00:17:25.989 http://cunit.sourceforge.net/ 00:17:25.989 00:17:25.989 00:17:25.989 Suite: bdevio tests on: Nvme1n1 00:17:25.989 Test: blockdev write read block ...passed 00:17:26.249 Test: blockdev write zeroes read block ...passed 00:17:26.249 Test: blockdev write zeroes read no split ...passed 00:17:26.249 Test: blockdev write zeroes read split ...passed 00:17:26.249 Test: blockdev write zeroes read split partial ...passed 00:17:26.249 Test: blockdev reset ...[2024-12-14 06:47:40.026116] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:26.249 [2024-12-14 06:47:40.026220] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef11c0 (9): Bad file descriptor 00:17:26.249 passed 00:17:26.249 Test: blockdev write read 8 blocks ...[2024-12-14 06:47:40.044493] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:26.249 passed 00:17:26.249 Test: blockdev write read size > 128k ...passed 00:17:26.249 Test: blockdev write read invalid size ...passed 00:17:26.249 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:26.249 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:26.249 Test: blockdev write read max offset ...passed 00:17:26.249 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:26.249 Test: blockdev writev readv 8 blocks ...passed 00:17:26.249 Test: blockdev writev readv 30 x 1block ...passed 00:17:26.249 Test: blockdev writev readv block ...passed 00:17:26.249 Test: blockdev writev readv size > 128k ...passed 00:17:26.249 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:26.249 Test: blockdev comparev and writev ...[2024-12-14 06:47:40.220940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.249 [2024-12-14 06:47:40.221074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:26.249 [2024-12-14 06:47:40.221095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.249 [2024-12-14 06:47:40.221106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:26.249 [2024-12-14 06:47:40.221470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.249 [2024-12-14 06:47:40.221487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:26.249 [2024-12-14 06:47:40.221502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.249 [2024-12-14 06:47:40.221512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:26.249 [2024-12-14 06:47:40.221862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.249 [2024-12-14 06:47:40.221878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:26.249 [2024-12-14 06:47:40.221893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.249 [2024-12-14 06:47:40.221904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:26.249 [2024-12-14 06:47:40.222277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.249 [2024-12-14 06:47:40.222294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:26.249 [2024-12-14 06:47:40.222309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.249 [2024-12-14 06:47:40.222319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:26.508 passed 00:17:26.508 Test: blockdev nvme passthru rw ...passed 00:17:26.508 Test: blockdev nvme passthru vendor specific ...[2024-12-14 06:47:40.306556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:26.508 [2024-12-14 06:47:40.306635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:26.508 [2024-12-14 06:47:40.306792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:26.508 [2024-12-14 06:47:40.306807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:26.508 [2024-12-14 06:47:40.306909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:26.508 [2024-12-14 06:47:40.306924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:26.508 [2024-12-14 06:47:40.307077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:26.508 [2024-12-14 06:47:40.307093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:26.508 passed 00:17:26.508 Test: blockdev nvme admin passthru ...passed 00:17:26.508 Test: blockdev copy ...passed 00:17:26.508 00:17:26.508 Run Summary: Type Total Ran Passed Failed Inactive 00:17:26.508 suites 1 1 n/a 0 0 00:17:26.508 tests 23 23 23 0 0 00:17:26.508 asserts 152 152 152 0 n/a 00:17:26.508 00:17:26.508 Elapsed time = 0.941 seconds 00:17:27.077 06:47:40 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.077 06:47:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.077 06:47:40 -- common/autotest_common.sh@10 -- # set +x 00:17:27.077 06:47:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.077 06:47:40 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:27.077 06:47:40 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:27.077 06:47:40 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:17:27.077 06:47:40 -- nvmf/common.sh@116 -- # sync 00:17:27.077 06:47:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:27.077 06:47:40 -- nvmf/common.sh@119 -- # set +e 00:17:27.077 06:47:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:27.077 06:47:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:27.077 rmmod nvme_tcp 00:17:27.077 rmmod nvme_fabrics 00:17:27.077 rmmod nvme_keyring 00:17:27.077 06:47:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:27.077 06:47:40 -- nvmf/common.sh@123 -- # set -e 00:17:27.077 06:47:40 -- nvmf/common.sh@124 -- # return 0 00:17:27.077 06:47:40 -- nvmf/common.sh@477 -- # '[' -n 77557 ']' 00:17:27.077 06:47:40 -- nvmf/common.sh@478 -- # killprocess 77557 00:17:27.077 06:47:40 -- common/autotest_common.sh@936 -- # '[' -z 77557 ']' 00:17:27.077 06:47:40 -- common/autotest_common.sh@940 -- # kill -0 77557 00:17:27.077 06:47:40 -- common/autotest_common.sh@941 -- # uname 00:17:27.077 06:47:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:27.077 06:47:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77557 00:17:27.077 06:47:40 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:27.077 06:47:40 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:27.077 killing process with pid 77557 00:17:27.077 06:47:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77557' 00:17:27.077 06:47:40 -- common/autotest_common.sh@955 -- # kill 77557 00:17:27.077 06:47:40 -- common/autotest_common.sh@960 -- # wait 77557 00:17:27.646 06:47:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:27.646 06:47:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:27.646 06:47:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:27.646 06:47:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:27.646 06:47:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:27.646 06:47:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.646 06:47:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.646 06:47:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.646 06:47:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:27.646 00:17:27.646 real 0m3.780s 00:17:27.646 user 0m13.085s 00:17:27.646 sys 0m1.395s 00:17:27.646 06:47:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:27.646 06:47:41 -- common/autotest_common.sh@10 -- # set +x 00:17:27.646 ************************************ 00:17:27.646 END TEST nvmf_bdevio_no_huge 00:17:27.646 ************************************ 00:17:27.646 06:47:41 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:27.646 06:47:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:27.646 06:47:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:27.646 06:47:41 -- common/autotest_common.sh@10 -- # set +x 00:17:27.646 ************************************ 00:17:27.646 START TEST nvmf_tls 00:17:27.646 ************************************ 00:17:27.646 06:47:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:27.646 * Looking for test storage... 
00:17:27.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:27.646 06:47:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:27.646 06:47:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:27.646 06:47:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:27.905 06:47:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:27.905 06:47:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:27.905 06:47:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:27.905 06:47:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:27.905 06:47:41 -- scripts/common.sh@335 -- # IFS=.-: 00:17:27.905 06:47:41 -- scripts/common.sh@335 -- # read -ra ver1 00:17:27.905 06:47:41 -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.905 06:47:41 -- scripts/common.sh@336 -- # read -ra ver2 00:17:27.905 06:47:41 -- scripts/common.sh@337 -- # local 'op=<' 00:17:27.905 06:47:41 -- scripts/common.sh@339 -- # ver1_l=2 00:17:27.905 06:47:41 -- scripts/common.sh@340 -- # ver2_l=1 00:17:27.905 06:47:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:27.905 06:47:41 -- scripts/common.sh@343 -- # case "$op" in 00:17:27.905 06:47:41 -- scripts/common.sh@344 -- # : 1 00:17:27.905 06:47:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:27.905 06:47:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:27.905 06:47:41 -- scripts/common.sh@364 -- # decimal 1 00:17:27.905 06:47:41 -- scripts/common.sh@352 -- # local d=1 00:17:27.905 06:47:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.905 06:47:41 -- scripts/common.sh@354 -- # echo 1 00:17:27.905 06:47:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:27.905 06:47:41 -- scripts/common.sh@365 -- # decimal 2 00:17:27.905 06:47:41 -- scripts/common.sh@352 -- # local d=2 00:17:27.905 06:47:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.905 06:47:41 -- scripts/common.sh@354 -- # echo 2 00:17:27.905 06:47:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:27.905 06:47:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:27.905 06:47:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:27.905 06:47:41 -- scripts/common.sh@367 -- # return 0 00:17:27.905 06:47:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.905 06:47:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:27.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.905 --rc genhtml_branch_coverage=1 00:17:27.905 --rc genhtml_function_coverage=1 00:17:27.905 --rc genhtml_legend=1 00:17:27.905 --rc geninfo_all_blocks=1 00:17:27.905 --rc geninfo_unexecuted_blocks=1 00:17:27.905 00:17:27.905 ' 00:17:27.905 06:47:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:27.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.905 --rc genhtml_branch_coverage=1 00:17:27.905 --rc genhtml_function_coverage=1 00:17:27.905 --rc genhtml_legend=1 00:17:27.905 --rc geninfo_all_blocks=1 00:17:27.905 --rc geninfo_unexecuted_blocks=1 00:17:27.905 00:17:27.905 ' 00:17:27.905 06:47:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:27.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.905 --rc genhtml_branch_coverage=1 00:17:27.905 --rc genhtml_function_coverage=1 00:17:27.905 --rc genhtml_legend=1 00:17:27.905 --rc geninfo_all_blocks=1 00:17:27.905 --rc geninfo_unexecuted_blocks=1 00:17:27.905 00:17:27.905 ' 00:17:27.905 
06:47:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:27.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.905 --rc genhtml_branch_coverage=1 00:17:27.905 --rc genhtml_function_coverage=1 00:17:27.905 --rc genhtml_legend=1 00:17:27.905 --rc geninfo_all_blocks=1 00:17:27.905 --rc geninfo_unexecuted_blocks=1 00:17:27.906 00:17:27.906 ' 00:17:27.906 06:47:41 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:27.906 06:47:41 -- nvmf/common.sh@7 -- # uname -s 00:17:27.906 06:47:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.906 06:47:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.906 06:47:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.906 06:47:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.906 06:47:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.906 06:47:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.906 06:47:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.906 06:47:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.906 06:47:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.906 06:47:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.906 06:47:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:17:27.906 06:47:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:17:27.906 06:47:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.906 06:47:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.906 06:47:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:27.906 06:47:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:27.906 06:47:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.906 06:47:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.906 06:47:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.906 06:47:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.906 06:47:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.906 06:47:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.906 06:47:41 -- paths/export.sh@5 -- # export PATH 00:17:27.906 06:47:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.906 06:47:41 -- nvmf/common.sh@46 -- # : 0 00:17:27.906 06:47:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:27.906 06:47:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:27.906 06:47:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:27.906 06:47:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.906 06:47:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.906 06:47:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:27.906 06:47:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:27.906 06:47:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:27.906 06:47:41 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:27.906 06:47:41 -- target/tls.sh@71 -- # nvmftestinit 00:17:27.906 06:47:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:27.906 06:47:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.906 06:47:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:27.906 06:47:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:27.906 06:47:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:27.906 06:47:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.906 06:47:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.906 06:47:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.906 06:47:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:27.906 06:47:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:27.906 06:47:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:27.906 06:47:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:27.906 06:47:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:27.906 06:47:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:27.906 06:47:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.906 06:47:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.906 06:47:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:27.906 06:47:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:27.906 06:47:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:27.906 06:47:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:27.906 06:47:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:27.906 
06:47:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.906 06:47:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:27.906 06:47:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:27.906 06:47:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:27.906 06:47:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:27.906 06:47:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:27.906 06:47:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:27.906 Cannot find device "nvmf_tgt_br" 00:17:27.906 06:47:41 -- nvmf/common.sh@154 -- # true 00:17:27.906 06:47:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:27.906 Cannot find device "nvmf_tgt_br2" 00:17:27.906 06:47:41 -- nvmf/common.sh@155 -- # true 00:17:27.906 06:47:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:27.906 06:47:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:27.906 Cannot find device "nvmf_tgt_br" 00:17:27.906 06:47:41 -- nvmf/common.sh@157 -- # true 00:17:27.906 06:47:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:27.906 Cannot find device "nvmf_tgt_br2" 00:17:27.906 06:47:41 -- nvmf/common.sh@158 -- # true 00:17:27.906 06:47:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:27.906 06:47:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:27.906 06:47:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:27.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.906 06:47:41 -- nvmf/common.sh@161 -- # true 00:17:27.906 06:47:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:27.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.906 06:47:41 -- nvmf/common.sh@162 -- # true 00:17:27.906 06:47:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:27.906 06:47:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:27.906 06:47:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:27.906 06:47:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:27.906 06:47:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:27.906 06:47:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:28.165 06:47:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:28.165 06:47:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:28.165 06:47:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:28.165 06:47:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:28.165 06:47:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:28.165 06:47:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:28.165 06:47:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:28.165 06:47:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:28.165 06:47:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:28.165 06:47:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:28.165 06:47:41 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:28.165 06:47:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:28.165 06:47:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:28.165 06:47:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:28.165 06:47:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:28.165 06:47:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:28.165 06:47:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:28.165 06:47:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:28.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:17:28.165 00:17:28.165 --- 10.0.0.2 ping statistics --- 00:17:28.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.165 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:17:28.165 06:47:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:28.165 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:28.165 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:17:28.165 00:17:28.165 --- 10.0.0.3 ping statistics --- 00:17:28.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.165 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:28.165 06:47:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:28.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:28.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:17:28.165 00:17:28.165 --- 10.0.0.1 ping statistics --- 00:17:28.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.165 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:17:28.165 06:47:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.165 06:47:41 -- nvmf/common.sh@421 -- # return 0 00:17:28.165 06:47:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:28.165 06:47:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.165 06:47:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:28.165 06:47:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:28.165 06:47:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.165 06:47:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:28.165 06:47:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:28.165 06:47:42 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:28.165 06:47:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:28.166 06:47:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:28.166 06:47:42 -- common/autotest_common.sh@10 -- # set +x 00:17:28.166 06:47:42 -- nvmf/common.sh@469 -- # nvmfpid=77804 00:17:28.166 06:47:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:28.166 06:47:42 -- nvmf/common.sh@470 -- # waitforlisten 77804 00:17:28.166 06:47:42 -- common/autotest_common.sh@829 -- # '[' -z 77804 ']' 00:17:28.166 06:47:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.166 06:47:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:28.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:28.166 06:47:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.166 06:47:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:28.166 06:47:42 -- common/autotest_common.sh@10 -- # set +x 00:17:28.166 [2024-12-14 06:47:42.074647] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:28.166 [2024-12-14 06:47:42.074734] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.424 [2024-12-14 06:47:42.215455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.424 [2024-12-14 06:47:42.342892] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:28.424 [2024-12-14 06:47:42.343093] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.424 [2024-12-14 06:47:42.343112] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.424 [2024-12-14 06:47:42.343127] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:28.424 [2024-12-14 06:47:42.343176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.360 06:47:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.360 06:47:43 -- common/autotest_common.sh@862 -- # return 0 00:17:29.360 06:47:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:29.360 06:47:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:29.360 06:47:43 -- common/autotest_common.sh@10 -- # set +x 00:17:29.360 06:47:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.360 06:47:43 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:17:29.360 06:47:43 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:29.619 true 00:17:29.619 06:47:43 -- target/tls.sh@82 -- # jq -r .tls_version 00:17:29.619 06:47:43 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:29.878 06:47:43 -- target/tls.sh@82 -- # version=0 00:17:29.878 06:47:43 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:17:29.878 06:47:43 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:30.137 06:47:43 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:30.137 06:47:43 -- target/tls.sh@90 -- # jq -r .tls_version 00:17:30.396 06:47:44 -- target/tls.sh@90 -- # version=13 00:17:30.396 06:47:44 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:17:30.396 06:47:44 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:30.655 06:47:44 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:30.655 06:47:44 -- target/tls.sh@98 -- # jq -r .tls_version 00:17:30.913 06:47:44 -- target/tls.sh@98 -- # version=7 00:17:30.913 06:47:44 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:17:30.913 06:47:44 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:30.914 06:47:44 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:30.914 06:47:44 -- 
target/tls.sh@105 -- # ktls=false 00:17:30.914 06:47:44 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:17:30.914 06:47:44 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:31.481 06:47:45 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:31.481 06:47:45 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:31.739 06:47:45 -- target/tls.sh@113 -- # ktls=true 00:17:31.739 06:47:45 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:17:31.739 06:47:45 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:31.998 06:47:45 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:31.998 06:47:45 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:17:32.256 06:47:46 -- target/tls.sh@121 -- # ktls=false 00:17:32.256 06:47:46 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:17:32.256 06:47:46 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:17:32.256 06:47:46 -- target/tls.sh@49 -- # local key hash crc 00:17:32.256 06:47:46 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:17:32.256 06:47:46 -- target/tls.sh@51 -- # hash=01 00:17:32.257 06:47:46 -- target/tls.sh@52 -- # gzip -1 -c 00:17:32.257 06:47:46 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:17:32.257 06:47:46 -- target/tls.sh@52 -- # tail -c8 00:17:32.257 06:47:46 -- target/tls.sh@52 -- # head -c 4 00:17:32.257 06:47:46 -- target/tls.sh@52 -- # crc='p$H�' 00:17:32.257 06:47:46 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:32.257 06:47:46 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:17:32.257 06:47:46 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:32.257 06:47:46 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:32.257 06:47:46 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:17:32.257 06:47:46 -- target/tls.sh@49 -- # local key hash crc 00:17:32.257 06:47:46 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:17:32.257 06:47:46 -- target/tls.sh@51 -- # hash=01 00:17:32.257 06:47:46 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:17:32.257 06:47:46 -- target/tls.sh@52 -- # gzip -1 -c 00:17:32.257 06:47:46 -- target/tls.sh@52 -- # tail -c8 00:17:32.257 06:47:46 -- target/tls.sh@52 -- # head -c 4 00:17:32.257 06:47:46 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:17:32.257 06:47:46 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:32.257 06:47:46 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:17:32.257 06:47:46 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:32.257 06:47:46 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:32.257 06:47:46 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:32.257 06:47:46 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:32.257 06:47:46 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:32.257 06:47:46 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 
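format_interchange_psk, exercised twice above, builds the NVMe TLS interchange key by appending a CRC-32 of the configured hex string (taken from the gzip trailer, whose last eight bytes are the CRC-32 followed by the input size) and base64-encoding the result under an NVMeTLSkey-1:<hash>: prefix; hash 01 is the variant used here. A minimal sketch of the same recipe; note that tls.sh routes the raw CRC bytes through /dev/fd/62 rather than a shell variable, so the variable form below is only safe while the CRC contains no NUL byte:

key=00112233445566778899aabbccddeeff
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)     # gzip trailer: CRC-32 (little-endian), then size
psk="NVMeTLSkey-1:01:$(echo -n "${key}${crc}" | base64):"
echo "$psk"   # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: as in the run above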
00:17:32.257 06:47:46 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:32.257 06:47:46 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:32.257 06:47:46 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:32.515 06:47:46 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:33.092 06:47:46 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:33.092 06:47:46 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:33.092 06:47:46 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:33.092 [2024-12-14 06:47:47.002578] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.092 06:47:47 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:33.366 06:47:47 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:33.624 [2024-12-14 06:47:47.458664] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:33.624 [2024-12-14 06:47:47.458975] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.624 06:47:47 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:33.882 malloc0 00:17:33.882 06:47:47 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:34.140 06:47:48 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:34.398 06:47:48 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:46.598 Initializing NVMe Controllers 00:17:46.598 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:46.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:46.598 Initialization complete. Launching workers. 
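Collapsed out of the trace, the TLS-enabled target bring-up and the first initiator pass amount to the rpc.py sequence below; a condensed sketch using the same paths and NQNs as the run (rpc.py talks to the nvmf_tgt started earlier with --wait-for-rpc, which is why framework_start_init appears):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
$rpc sock_impl_set_options -i ssl --tls-version 13          # pin the ssl sock implementation to TLS 1.3
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"
# initiator side: perf connects with -S ssl and the matching PSK
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl \
  -q 64 -o 4096 -w randrw -M 30 -t 10 --psk-path "$key" \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1'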
00:17:46.598 ======================================================== 00:17:46.598 Latency(us) 00:17:46.598 Device Information : IOPS MiB/s Average min max 00:17:46.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11444.88 44.71 5593.03 1636.12 8698.98 00:17:46.598 ======================================================== 00:17:46.598 Total : 11444.88 44.71 5593.03 1636.12 8698.98 00:17:46.598 00:17:46.598 06:47:58 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:46.598 06:47:58 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:46.598 06:47:58 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:46.598 06:47:58 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:46.598 06:47:58 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:46.598 06:47:58 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:46.598 06:47:58 -- target/tls.sh@28 -- # bdevperf_pid=78178 00:17:46.598 06:47:58 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:46.598 06:47:58 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:46.598 06:47:58 -- target/tls.sh@31 -- # waitforlisten 78178 /var/tmp/bdevperf.sock 00:17:46.598 06:47:58 -- common/autotest_common.sh@829 -- # '[' -z 78178 ']' 00:17:46.598 06:47:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:46.598 06:47:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.598 06:47:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:46.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:46.598 06:47:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.598 06:47:58 -- common/autotest_common.sh@10 -- # set +x 00:17:46.598 [2024-12-14 06:47:58.506803] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:46.598 [2024-12-14 06:47:58.506915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78178 ] 00:17:46.598 [2024-12-14 06:47:58.648898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.598 [2024-12-14 06:47:58.767697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.598 06:47:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:46.598 06:47:59 -- common/autotest_common.sh@862 -- # return 0 00:17:46.598 06:47:59 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:46.598 [2024-12-14 06:47:59.615560] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:46.598 TLSTESTn1 00:17:46.598 06:47:59 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:46.598 Running I/O for 10 seconds... 
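The bdevperf pass mirrors the perf run: bdevperf is started idle (-z) on its own RPC socket, a TLS-backed controller is attached with the same PSK, and bdevperf.py then drives the verify workload against the resulting TLSTESTn1 bdev. A condensed sketch with the same paths as above (the harness additionally waits on the socket with waitforlisten before issuing the attach):

sock=/var/tmp/bdevperf.sock
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
  --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt          # creates bdev TLSTESTn1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests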
00:17:56.567 00:17:56.567 Latency(us) 00:17:56.567 [2024-12-14T06:48:10.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.567 [2024-12-14T06:48:10.559Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:56.567 Verification LBA range: start 0x0 length 0x2000 00:17:56.567 TLSTESTn1 : 10.02 5963.27 23.29 0.00 0.00 21428.32 4438.57 19660.80 00:17:56.567 [2024-12-14T06:48:10.559Z] =================================================================================================================== 00:17:56.567 [2024-12-14T06:48:10.559Z] Total : 5963.27 23.29 0.00 0.00 21428.32 4438.57 19660.80 00:17:56.567 0 00:17:56.567 06:48:09 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:56.567 06:48:09 -- target/tls.sh@45 -- # killprocess 78178 00:17:56.567 06:48:09 -- common/autotest_common.sh@936 -- # '[' -z 78178 ']' 00:17:56.567 06:48:09 -- common/autotest_common.sh@940 -- # kill -0 78178 00:17:56.567 06:48:09 -- common/autotest_common.sh@941 -- # uname 00:17:56.567 06:48:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:56.567 06:48:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78178 00:17:56.567 06:48:09 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:56.567 06:48:09 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:56.567 killing process with pid 78178 00:17:56.567 06:48:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78178' 00:17:56.567 06:48:09 -- common/autotest_common.sh@955 -- # kill 78178 00:17:56.567 Received shutdown signal, test time was about 10.000000 seconds 00:17:56.567 00:17:56.567 Latency(us) 00:17:56.567 [2024-12-14T06:48:10.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.567 [2024-12-14T06:48:10.559Z] =================================================================================================================== 00:17:56.567 [2024-12-14T06:48:10.559Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:56.567 06:48:09 -- common/autotest_common.sh@960 -- # wait 78178 00:17:56.567 06:48:10 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:56.567 06:48:10 -- common/autotest_common.sh@650 -- # local es=0 00:17:56.567 06:48:10 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:56.567 06:48:10 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:56.567 06:48:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:56.567 06:48:10 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:56.567 06:48:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:56.567 06:48:10 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:56.567 06:48:10 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:56.567 06:48:10 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:56.567 06:48:10 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:56.567 06:48:10 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:17:56.567 06:48:10 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:56.567 
06:48:10 -- target/tls.sh@28 -- # bdevperf_pid=78324 00:17:56.567 06:48:10 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:56.567 06:48:10 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:56.567 06:48:10 -- target/tls.sh@31 -- # waitforlisten 78324 /var/tmp/bdevperf.sock 00:17:56.567 06:48:10 -- common/autotest_common.sh@829 -- # '[' -z 78324 ']' 00:17:56.567 06:48:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.567 06:48:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:56.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.567 06:48:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:56.567 06:48:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:56.567 06:48:10 -- common/autotest_common.sh@10 -- # set +x 00:17:56.567 [2024-12-14 06:48:10.251136] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:56.567 [2024-12-14 06:48:10.251258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78324 ] 00:17:56.567 [2024-12-14 06:48:10.386227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.567 [2024-12-14 06:48:10.497651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.501 06:48:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.501 06:48:11 -- common/autotest_common.sh@862 -- # return 0 00:17:57.501 06:48:11 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:57.501 [2024-12-14 06:48:11.388587] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:57.501 [2024-12-14 06:48:11.398253] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:57.501 [2024-12-14 06:48:11.399159] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ed3d0 (107): Transport endpoint is not connected 00:17:57.501 [2024-12-14 06:48:11.400148] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ed3d0 (9): Bad file descriptor 00:17:57.501 [2024-12-14 06:48:11.401150] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:57.501 [2024-12-14 06:48:11.401175] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:57.501 [2024-12-14 06:48:11.401185] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:57.501 2024/12/14 06:48:11 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:57.501 request: 00:17:57.501 { 00:17:57.501 "method": "bdev_nvme_attach_controller", 00:17:57.501 "params": { 00:17:57.501 "name": "TLSTEST", 00:17:57.501 "trtype": "tcp", 00:17:57.501 "traddr": "10.0.0.2", 00:17:57.501 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:57.501 "adrfam": "ipv4", 00:17:57.501 "trsvcid": "4420", 00:17:57.501 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.501 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:17:57.501 } 00:17:57.501 } 00:17:57.501 Got JSON-RPC error response 00:17:57.501 GoRPCClient: error on JSON-RPC call 00:17:57.501 06:48:11 -- target/tls.sh@36 -- # killprocess 78324 00:17:57.501 06:48:11 -- common/autotest_common.sh@936 -- # '[' -z 78324 ']' 00:17:57.501 06:48:11 -- common/autotest_common.sh@940 -- # kill -0 78324 00:17:57.501 06:48:11 -- common/autotest_common.sh@941 -- # uname 00:17:57.501 06:48:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:57.501 06:48:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78324 00:17:57.501 killing process with pid 78324 00:17:57.501 Received shutdown signal, test time was about 10.000000 seconds 00:17:57.501 00:17:57.501 Latency(us) 00:17:57.501 [2024-12-14T06:48:11.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.501 [2024-12-14T06:48:11.493Z] =================================================================================================================== 00:17:57.501 [2024-12-14T06:48:11.493Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:57.501 06:48:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:57.501 06:48:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:57.501 06:48:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78324' 00:17:57.501 06:48:11 -- common/autotest_common.sh@955 -- # kill 78324 00:17:57.501 06:48:11 -- common/autotest_common.sh@960 -- # wait 78324 00:17:58.068 06:48:11 -- target/tls.sh@37 -- # return 1 00:17:58.068 06:48:11 -- common/autotest_common.sh@653 -- # es=1 00:17:58.068 06:48:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:58.068 06:48:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:58.068 06:48:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:58.068 06:48:11 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:58.068 06:48:11 -- common/autotest_common.sh@650 -- # local es=0 00:17:58.068 06:48:11 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:58.068 06:48:11 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:58.068 06:48:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:58.068 06:48:11 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:58.068 06:48:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:58.068 06:48:11 -- common/autotest_common.sh@653 -- # 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:58.068 06:48:11 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:58.068 06:48:11 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:58.068 06:48:11 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:58.068 06:48:11 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:58.068 06:48:11 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:58.068 06:48:11 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:58.068 06:48:11 -- target/tls.sh@28 -- # bdevperf_pid=78370 00:17:58.068 06:48:11 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:58.068 06:48:11 -- target/tls.sh@31 -- # waitforlisten 78370 /var/tmp/bdevperf.sock 00:17:58.068 06:48:11 -- common/autotest_common.sh@829 -- # '[' -z 78370 ']' 00:17:58.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:58.068 06:48:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.068 06:48:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:58.068 06:48:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.068 06:48:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:58.068 06:48:11 -- common/autotest_common.sh@10 -- # set +x 00:17:58.068 [2024-12-14 06:48:11.813934] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:58.068 [2024-12-14 06:48:11.814024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78370 ] 00:17:58.068 [2024-12-14 06:48:11.943022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.327 [2024-12-14 06:48:12.058759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.893 06:48:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.893 06:48:12 -- common/autotest_common.sh@862 -- # return 0 00:17:58.893 06:48:12 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:59.151 [2024-12-14 06:48:13.054130] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:59.151 [2024-12-14 06:48:13.059004] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:59.151 [2024-12-14 06:48:13.059073] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:59.151 [2024-12-14 06:48:13.059134] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:59.151 [2024-12-14 06:48:13.059695] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x12bb3d0 (107): Transport endpoint is not connected 00:17:59.151 [2024-12-14 06:48:13.060684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12bb3d0 (9): Bad file descriptor 00:17:59.151 [2024-12-14 06:48:13.061680] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:59.151 [2024-12-14 06:48:13.061696] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:59.151 [2024-12-14 06:48:13.061706] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:59.151 2024/12/14 06:48:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:59.151 request: 00:17:59.151 { 00:17:59.151 "method": "bdev_nvme_attach_controller", 00:17:59.151 "params": { 00:17:59.151 "name": "TLSTEST", 00:17:59.151 "trtype": "tcp", 00:17:59.151 "traddr": "10.0.0.2", 00:17:59.151 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:59.151 "adrfam": "ipv4", 00:17:59.151 "trsvcid": "4420", 00:17:59.151 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.151 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:59.151 } 00:17:59.151 } 00:17:59.151 Got JSON-RPC error response 00:17:59.151 GoRPCClient: error on JSON-RPC call 00:17:59.151 06:48:13 -- target/tls.sh@36 -- # killprocess 78370 00:17:59.151 06:48:13 -- common/autotest_common.sh@936 -- # '[' -z 78370 ']' 00:17:59.151 06:48:13 -- common/autotest_common.sh@940 -- # kill -0 78370 00:17:59.151 06:48:13 -- common/autotest_common.sh@941 -- # uname 00:17:59.151 06:48:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:59.151 06:48:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78370 00:17:59.151 killing process with pid 78370 00:17:59.151 Received shutdown signal, test time was about 10.000000 seconds 00:17:59.151 00:17:59.151 Latency(us) 00:17:59.151 [2024-12-14T06:48:13.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.151 [2024-12-14T06:48:13.143Z] =================================================================================================================== 00:17:59.151 [2024-12-14T06:48:13.143Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:59.151 06:48:13 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:59.151 06:48:13 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:59.151 06:48:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78370' 00:17:59.151 06:48:13 -- common/autotest_common.sh@955 -- # kill 78370 00:17:59.151 06:48:13 -- common/autotest_common.sh@960 -- # wait 78370 00:17:59.717 06:48:13 -- target/tls.sh@37 -- # return 1 00:17:59.717 06:48:13 -- common/autotest_common.sh@653 -- # es=1 00:17:59.717 06:48:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:59.717 06:48:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:59.717 06:48:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:59.717 06:48:13 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:59.717 06:48:13 -- 
common/autotest_common.sh@650 -- # local es=0 00:17:59.717 06:48:13 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:59.717 06:48:13 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:59.717 06:48:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:59.717 06:48:13 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:59.717 06:48:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:59.717 06:48:13 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:59.717 06:48:13 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:59.717 06:48:13 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:59.717 06:48:13 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:59.717 06:48:13 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:59.717 06:48:13 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:59.717 06:48:13 -- target/tls.sh@28 -- # bdevperf_pid=78415 00:17:59.717 06:48:13 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:59.717 06:48:13 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:59.717 06:48:13 -- target/tls.sh@31 -- # waitforlisten 78415 /var/tmp/bdevperf.sock 00:17:59.717 06:48:13 -- common/autotest_common.sh@829 -- # '[' -z 78415 ']' 00:17:59.717 06:48:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.717 06:48:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:59.717 06:48:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:59.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:59.717 06:48:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:59.717 06:48:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.717 [2024-12-14 06:48:13.491137] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:59.717 [2024-12-14 06:48:13.491241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78415 ] 00:17:59.717 [2024-12-14 06:48:13.633142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.975 [2024-12-14 06:48:13.735424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.541 06:48:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:00.541 06:48:14 -- common/autotest_common.sh@862 -- # return 0 00:18:00.541 06:48:14 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:18:00.799 [2024-12-14 06:48:14.687915] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:00.799 [2024-12-14 06:48:14.698383] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:00.799 [2024-12-14 06:48:14.698416] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:00.799 [2024-12-14 06:48:14.698469] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:00.799 [2024-12-14 06:48:14.698602] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15423d0 (107): Transport endpoint is not connected 00:18:00.799 [2024-12-14 06:48:14.699593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15423d0 (9): Bad file descriptor 00:18:00.799 [2024-12-14 06:48:14.700604] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:00.799 [2024-12-14 06:48:14.700623] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:00.799 [2024-12-14 06:48:14.700633] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:00.799 2024/12/14 06:48:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:18:00.799 request: 00:18:00.799 { 00:18:00.799 "method": "bdev_nvme_attach_controller", 00:18:00.799 "params": { 00:18:00.799 "name": "TLSTEST", 00:18:00.799 "trtype": "tcp", 00:18:00.799 "traddr": "10.0.0.2", 00:18:00.799 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:00.799 "adrfam": "ipv4", 00:18:00.799 "trsvcid": "4420", 00:18:00.799 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:00.799 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:18:00.799 } 00:18:00.799 } 00:18:00.799 Got JSON-RPC error response 00:18:00.799 GoRPCClient: error on JSON-RPC call 00:18:00.799 06:48:14 -- target/tls.sh@36 -- # killprocess 78415 00:18:00.799 06:48:14 -- common/autotest_common.sh@936 -- # '[' -z 78415 ']' 00:18:00.799 06:48:14 -- common/autotest_common.sh@940 -- # kill -0 78415 00:18:00.799 06:48:14 -- common/autotest_common.sh@941 -- # uname 00:18:00.800 06:48:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:00.800 06:48:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78415 00:18:00.800 killing process with pid 78415 00:18:00.800 Received shutdown signal, test time was about 10.000000 seconds 00:18:00.800 00:18:00.800 Latency(us) 00:18:00.800 [2024-12-14T06:48:14.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.800 [2024-12-14T06:48:14.792Z] =================================================================================================================== 00:18:00.800 [2024-12-14T06:48:14.792Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:00.800 06:48:14 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:00.800 06:48:14 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:00.800 06:48:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78415' 00:18:00.800 06:48:14 -- common/autotest_common.sh@955 -- # kill 78415 00:18:00.800 06:48:14 -- common/autotest_common.sh@960 -- # wait 78415 00:18:01.365 06:48:15 -- target/tls.sh@37 -- # return 1 00:18:01.365 06:48:15 -- common/autotest_common.sh@653 -- # es=1 00:18:01.365 06:48:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:01.365 06:48:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:01.365 06:48:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:01.365 06:48:15 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:01.365 06:48:15 -- common/autotest_common.sh@650 -- # local es=0 00:18:01.365 06:48:15 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:01.365 06:48:15 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:01.365 06:48:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.365 06:48:15 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:01.365 06:48:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.365 06:48:15 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:01.365 06:48:15 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:01.365 06:48:15 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:01.365 06:48:15 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:01.365 06:48:15 -- target/tls.sh@23 -- # psk= 00:18:01.365 06:48:15 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:01.365 06:48:15 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:01.365 06:48:15 -- target/tls.sh@28 -- # bdevperf_pid=78461 00:18:01.365 06:48:15 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:01.365 06:48:15 -- target/tls.sh@31 -- # waitforlisten 78461 /var/tmp/bdevperf.sock 00:18:01.365 06:48:15 -- common/autotest_common.sh@829 -- # '[' -z 78461 ']' 00:18:01.365 06:48:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:01.365 06:48:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.365 06:48:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:01.365 06:48:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.365 06:48:15 -- common/autotest_common.sh@10 -- # set +x 00:18:01.365 [2024-12-14 06:48:15.129033] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:01.365 [2024-12-14 06:48:15.129341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78461 ] 00:18:01.365 [2024-12-14 06:48:15.265916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.622 [2024-12-14 06:48:15.360448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.187 06:48:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.187 06:48:16 -- common/autotest_common.sh@862 -- # return 0 00:18:02.187 06:48:16 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:02.446 [2024-12-14 06:48:16.358639] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:02.446 [2024-12-14 06:48:16.360600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d2dc0 (9): Bad file descriptor 00:18:02.446 [2024-12-14 06:48:16.361595] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:02.446 [2024-12-14 06:48:16.361634] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:02.446 [2024-12-14 06:48:16.361644] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:02.446 2024/12/14 06:48:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:18:02.446 request: 00:18:02.446 { 00:18:02.446 "method": "bdev_nvme_attach_controller", 00:18:02.446 "params": { 00:18:02.446 "name": "TLSTEST", 00:18:02.446 "trtype": "tcp", 00:18:02.446 "traddr": "10.0.0.2", 00:18:02.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:02.446 "adrfam": "ipv4", 00:18:02.446 "trsvcid": "4420", 00:18:02.446 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:18:02.446 } 00:18:02.446 } 00:18:02.446 Got JSON-RPC error response 00:18:02.446 GoRPCClient: error on JSON-RPC call 00:18:02.446 06:48:16 -- target/tls.sh@36 -- # killprocess 78461 00:18:02.446 06:48:16 -- common/autotest_common.sh@936 -- # '[' -z 78461 ']' 00:18:02.446 06:48:16 -- common/autotest_common.sh@940 -- # kill -0 78461 00:18:02.446 06:48:16 -- common/autotest_common.sh@941 -- # uname 00:18:02.446 06:48:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:02.446 06:48:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78461 00:18:02.446 06:48:16 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:02.446 06:48:16 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:02.446 killing process with pid 78461 00:18:02.446 06:48:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78461' 00:18:02.446 06:48:16 -- common/autotest_common.sh@955 -- # kill 78461 00:18:02.446 Received shutdown signal, test time was about 10.000000 seconds 00:18:02.446 00:18:02.446 Latency(us) 00:18:02.446 [2024-12-14T06:48:16.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.446 [2024-12-14T06:48:16.438Z] =================================================================================================================== 00:18:02.446 [2024-12-14T06:48:16.438Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:02.446 06:48:16 -- common/autotest_common.sh@960 -- # wait 78461 00:18:03.012 06:48:16 -- target/tls.sh@37 -- # return 1 00:18:03.012 06:48:16 -- common/autotest_common.sh@653 -- # es=1 00:18:03.012 06:48:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:03.012 06:48:16 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:03.012 06:48:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:03.012 06:48:16 -- target/tls.sh@167 -- # killprocess 77804 00:18:03.012 06:48:16 -- common/autotest_common.sh@936 -- # '[' -z 77804 ']' 00:18:03.012 06:48:16 -- common/autotest_common.sh@940 -- # kill -0 77804 00:18:03.012 06:48:16 -- common/autotest_common.sh@941 -- # uname 00:18:03.012 06:48:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:03.012 06:48:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77804 00:18:03.012 06:48:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:03.012 06:48:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:03.012 killing process with pid 77804 00:18:03.012 06:48:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77804' 00:18:03.012 06:48:16 -- common/autotest_common.sh@955 -- # kill 77804 00:18:03.013 06:48:16 -- common/autotest_common.sh@960 -- # wait 77804 00:18:03.270 06:48:17 -- target/tls.sh@168 -- # format_interchange_psk 
00112233445566778899aabbccddeeff0011223344556677 02 00:18:03.270 06:48:17 -- target/tls.sh@49 -- # local key hash crc 00:18:03.270 06:48:17 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:03.270 06:48:17 -- target/tls.sh@51 -- # hash=02 00:18:03.270 06:48:17 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:18:03.270 06:48:17 -- target/tls.sh@52 -- # tail -c8 00:18:03.270 06:48:17 -- target/tls.sh@52 -- # gzip -1 -c 00:18:03.270 06:48:17 -- target/tls.sh@52 -- # head -c 4 00:18:03.270 06:48:17 -- target/tls.sh@52 -- # crc='�e�'\''' 00:18:03.270 06:48:17 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:18:03.270 06:48:17 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:18:03.270 06:48:17 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:03.271 06:48:17 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:03.271 06:48:17 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:03.271 06:48:17 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:03.271 06:48:17 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:03.271 06:48:17 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:18:03.271 06:48:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:03.271 06:48:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:03.271 06:48:17 -- common/autotest_common.sh@10 -- # set +x 00:18:03.271 06:48:17 -- nvmf/common.sh@469 -- # nvmfpid=78527 00:18:03.271 06:48:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:03.271 06:48:17 -- nvmf/common.sh@470 -- # waitforlisten 78527 00:18:03.271 06:48:17 -- common/autotest_common.sh@829 -- # '[' -z 78527 ']' 00:18:03.271 06:48:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.271 06:48:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.271 06:48:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.271 06:48:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.271 06:48:17 -- common/autotest_common.sh@10 -- # set +x 00:18:03.271 [2024-12-14 06:48:17.174664] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:03.271 [2024-12-14 06:48:17.174789] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.529 [2024-12-14 06:48:17.309577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.529 [2024-12-14 06:48:17.393156] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:03.529 [2024-12-14 06:48:17.393304] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:03.529 [2024-12-14 06:48:17.393317] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.529 [2024-12-14 06:48:17.393326] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:03.529 [2024-12-14 06:48:17.393363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.095 06:48:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.095 06:48:18 -- common/autotest_common.sh@862 -- # return 0 00:18:04.095 06:48:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:04.095 06:48:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:04.095 06:48:18 -- common/autotest_common.sh@10 -- # set +x 00:18:04.353 06:48:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.353 06:48:18 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:04.353 06:48:18 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:04.353 06:48:18 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:04.611 [2024-12-14 06:48:18.358718] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.611 06:48:18 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:04.611 06:48:18 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:04.869 [2024-12-14 06:48:18.850800] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:04.869 [2024-12-14 06:48:18.851097] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.127 06:48:18 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:05.385 malloc0 00:18:05.385 06:48:19 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:05.385 06:48:19 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:05.643 06:48:19 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:05.643 06:48:19 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:05.643 06:48:19 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:05.643 06:48:19 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:05.643 06:48:19 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:18:05.643 06:48:19 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:05.901 06:48:19 -- target/tls.sh@28 -- # bdevperf_pid=78624 00:18:05.901 06:48:19 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:05.901 06:48:19 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:05.901 06:48:19 -- target/tls.sh@31 -- # waitforlisten 78624 /var/tmp/bdevperf.sock 00:18:05.901 06:48:19 -- 
common/autotest_common.sh@829 -- # '[' -z 78624 ']' 00:18:05.901 06:48:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:05.901 06:48:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:05.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:05.901 06:48:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:05.901 06:48:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.901 06:48:19 -- common/autotest_common.sh@10 -- # set +x 00:18:05.901 [2024-12-14 06:48:19.685810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:05.901 [2024-12-14 06:48:19.685905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78624 ] 00:18:05.901 [2024-12-14 06:48:19.821688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.159 [2024-12-14 06:48:19.936699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.725 06:48:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.725 06:48:20 -- common/autotest_common.sh@862 -- # return 0 00:18:06.725 06:48:20 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:06.983 [2024-12-14 06:48:20.759636] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:06.983 TLSTESTn1 00:18:06.983 06:48:20 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:06.983 Running I/O for 10 seconds... 
00:18:19.188 00:18:19.188 Latency(us) 00:18:19.188 [2024-12-14T06:48:33.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.188 [2024-12-14T06:48:33.180Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:19.188 Verification LBA range: start 0x0 length 0x2000 00:18:19.188 TLSTESTn1 : 10.02 5951.26 23.25 0.00 0.00 21472.57 4974.78 19422.49 00:18:19.188 [2024-12-14T06:48:33.180Z] =================================================================================================================== 00:18:19.188 [2024-12-14T06:48:33.180Z] Total : 5951.26 23.25 0.00 0.00 21472.57 4974.78 19422.49 00:18:19.188 0 00:18:19.188 06:48:31 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:19.188 06:48:31 -- target/tls.sh@45 -- # killprocess 78624 00:18:19.188 06:48:31 -- common/autotest_common.sh@936 -- # '[' -z 78624 ']' 00:18:19.188 06:48:31 -- common/autotest_common.sh@940 -- # kill -0 78624 00:18:19.188 06:48:31 -- common/autotest_common.sh@941 -- # uname 00:18:19.188 06:48:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:19.188 06:48:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78624 00:18:19.188 06:48:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:19.188 06:48:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:19.188 killing process with pid 78624 00:18:19.188 06:48:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78624' 00:18:19.188 Received shutdown signal, test time was about 10.000000 seconds 00:18:19.188 00:18:19.188 Latency(us) 00:18:19.188 [2024-12-14T06:48:33.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.188 [2024-12-14T06:48:33.180Z] =================================================================================================================== 00:18:19.188 [2024-12-14T06:48:33.180Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:19.188 06:48:31 -- common/autotest_common.sh@955 -- # kill 78624 00:18:19.188 06:48:31 -- common/autotest_common.sh@960 -- # wait 78624 00:18:19.188 06:48:31 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:19.188 06:48:31 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:19.188 06:48:31 -- common/autotest_common.sh@650 -- # local es=0 00:18:19.188 06:48:31 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:19.188 06:48:31 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:19.188 06:48:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.188 06:48:31 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:19.188 06:48:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.188 06:48:31 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:19.188 06:48:31 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:19.188 06:48:31 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:19.188 06:48:31 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:19.188 06:48:31 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:18:19.188 06:48:31 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:19.188 06:48:31 -- target/tls.sh@28 -- # bdevperf_pid=78777 00:18:19.188 06:48:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:19.188 06:48:31 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:19.188 06:48:31 -- target/tls.sh@31 -- # waitforlisten 78777 /var/tmp/bdevperf.sock 00:18:19.188 06:48:31 -- common/autotest_common.sh@829 -- # '[' -z 78777 ']' 00:18:19.188 06:48:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.188 06:48:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.188 06:48:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.188 06:48:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.188 06:48:31 -- common/autotest_common.sh@10 -- # set +x 00:18:19.188 [2024-12-14 06:48:31.404995] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:19.188 [2024-12-14 06:48:31.405242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78777 ] 00:18:19.189 [2024-12-14 06:48:31.534715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.189 [2024-12-14 06:48:31.634858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.189 06:48:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:19.189 06:48:32 -- common/autotest_common.sh@862 -- # return 0 00:18:19.189 06:48:32 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:19.189 [2024-12-14 06:48:32.646880] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.189 [2024-12-14 06:48:32.646927] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:19.189 2024/12/14 06:48:32 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:19.189 request: 00:18:19.189 { 00:18:19.189 "method": "bdev_nvme_attach_controller", 00:18:19.189 "params": { 00:18:19.189 "name": "TLSTEST", 00:18:19.189 "trtype": "tcp", 00:18:19.189 "traddr": "10.0.0.2", 00:18:19.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:19.189 "adrfam": "ipv4", 00:18:19.189 "trsvcid": "4420", 00:18:19.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.189 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:18:19.189 } 00:18:19.189 } 00:18:19.189 Got 
JSON-RPC error response 00:18:19.189 GoRPCClient: error on JSON-RPC call 00:18:19.189 06:48:32 -- target/tls.sh@36 -- # killprocess 78777 00:18:19.189 06:48:32 -- common/autotest_common.sh@936 -- # '[' -z 78777 ']' 00:18:19.189 06:48:32 -- common/autotest_common.sh@940 -- # kill -0 78777 00:18:19.189 06:48:32 -- common/autotest_common.sh@941 -- # uname 00:18:19.189 06:48:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:19.189 06:48:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78777 00:18:19.189 06:48:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:19.189 killing process with pid 78777 00:18:19.189 Received shutdown signal, test time was about 10.000000 seconds 00:18:19.189 00:18:19.189 Latency(us) 00:18:19.189 [2024-12-14T06:48:33.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.189 [2024-12-14T06:48:33.181Z] =================================================================================================================== 00:18:19.189 [2024-12-14T06:48:33.181Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:19.189 06:48:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:19.189 06:48:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78777' 00:18:19.189 06:48:32 -- common/autotest_common.sh@955 -- # kill 78777 00:18:19.189 06:48:32 -- common/autotest_common.sh@960 -- # wait 78777 00:18:19.189 06:48:33 -- target/tls.sh@37 -- # return 1 00:18:19.189 06:48:33 -- common/autotest_common.sh@653 -- # es=1 00:18:19.189 06:48:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:19.189 06:48:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:19.189 06:48:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:19.189 06:48:33 -- target/tls.sh@183 -- # killprocess 78527 00:18:19.189 06:48:33 -- common/autotest_common.sh@936 -- # '[' -z 78527 ']' 00:18:19.189 06:48:33 -- common/autotest_common.sh@940 -- # kill -0 78527 00:18:19.189 06:48:33 -- common/autotest_common.sh@941 -- # uname 00:18:19.189 06:48:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:19.189 06:48:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78527 00:18:19.189 06:48:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:19.189 killing process with pid 78527 00:18:19.189 06:48:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:19.189 06:48:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78527' 00:18:19.189 06:48:33 -- common/autotest_common.sh@955 -- # kill 78527 00:18:19.189 06:48:33 -- common/autotest_common.sh@960 -- # wait 78527 00:18:19.448 06:48:33 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:19.448 06:48:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:19.448 06:48:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:19.448 06:48:33 -- common/autotest_common.sh@10 -- # set +x 00:18:19.448 06:48:33 -- nvmf/common.sh@469 -- # nvmfpid=78833 00:18:19.448 06:48:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:19.448 06:48:33 -- nvmf/common.sh@470 -- # waitforlisten 78833 00:18:19.448 06:48:33 -- common/autotest_common.sh@829 -- # '[' -z 78833 ']' 00:18:19.448 06:48:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.448 06:48:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.448 
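The failure above is the expected negative path of the test: target/tls.sh deliberately loosens the key file mode, and the target-side PSK loader then refuses any key that is group- or world-readable ("Incorrect permissions for PSK file"). A minimal sketch of that sequence, reusing only commands that appear in this run; the trailing echo messages are illustrative and not part of the test script:

  chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
  # attach is expected to fail with "Could not retrieve PSK from file"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt \
      && echo "unexpected success" || echo "rejected as expected"
  chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt   # restored below so the positive case can run

The restrictive-mode requirement mirrors ssh private-key handling: the PSK is a shared secret, so the target declines to load it while other users on the host could read it.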
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.448 06:48:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.448 06:48:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.448 06:48:33 -- common/autotest_common.sh@10 -- # set +x 00:18:19.448 [2024-12-14 06:48:33.426013] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:19.448 [2024-12-14 06:48:33.426089] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.706 [2024-12-14 06:48:33.551158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.706 [2024-12-14 06:48:33.651230] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:19.706 [2024-12-14 06:48:33.651371] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.706 [2024-12-14 06:48:33.651382] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.706 [2024-12-14 06:48:33.651391] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.706 [2024-12-14 06:48:33.651420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.642 06:48:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.642 06:48:34 -- common/autotest_common.sh@862 -- # return 0 00:18:20.642 06:48:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:20.642 06:48:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:20.642 06:48:34 -- common/autotest_common.sh@10 -- # set +x 00:18:20.642 06:48:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.642 06:48:34 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:20.642 06:48:34 -- common/autotest_common.sh@650 -- # local es=0 00:18:20.642 06:48:34 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:20.642 06:48:34 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:18:20.642 06:48:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.642 06:48:34 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:18:20.642 06:48:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.642 06:48:34 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:20.642 06:48:34 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:20.642 06:48:34 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:20.642 [2024-12-14 06:48:34.603727] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.642 06:48:34 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:20.901 06:48:34 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:21.160 
[2024-12-14 06:48:34.995810] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:21.160 [2024-12-14 06:48:34.996111] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.160 06:48:35 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:21.418 malloc0 00:18:21.418 06:48:35 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:21.676 06:48:35 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:21.676 [2024-12-14 06:48:35.618747] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:21.676 [2024-12-14 06:48:35.618811] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:21.676 [2024-12-14 06:48:35.618829] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:18:21.676 2024/12/14 06:48:35 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:18:21.676 request: 00:18:21.676 { 00:18:21.676 "method": "nvmf_subsystem_add_host", 00:18:21.676 "params": { 00:18:21.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.676 "host": "nqn.2016-06.io.spdk:host1", 00:18:21.676 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:18:21.676 } 00:18:21.676 } 00:18:21.676 Got JSON-RPC error response 00:18:21.676 GoRPCClient: error on JSON-RPC call 00:18:21.676 06:48:35 -- common/autotest_common.sh@653 -- # es=1 00:18:21.676 06:48:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:21.676 06:48:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:21.676 06:48:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:21.676 06:48:35 -- target/tls.sh@189 -- # killprocess 78833 00:18:21.676 06:48:35 -- common/autotest_common.sh@936 -- # '[' -z 78833 ']' 00:18:21.676 06:48:35 -- common/autotest_common.sh@940 -- # kill -0 78833 00:18:21.676 06:48:35 -- common/autotest_common.sh@941 -- # uname 00:18:21.677 06:48:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:21.677 06:48:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78833 00:18:21.935 06:48:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:21.935 killing process with pid 78833 00:18:21.935 06:48:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:21.935 06:48:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78833' 00:18:21.935 06:48:35 -- common/autotest_common.sh@955 -- # kill 78833 00:18:21.935 06:48:35 -- common/autotest_common.sh@960 -- # wait 78833 00:18:22.194 06:48:35 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:22.194 06:48:35 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:18:22.194 06:48:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:22.194 06:48:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:22.194 06:48:35 -- common/autotest_common.sh@10 -- # set +x 00:18:22.194 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:18:22.194 06:48:35 -- nvmf/common.sh@469 -- # nvmfpid=78938 00:18:22.194 06:48:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:22.194 06:48:35 -- nvmf/common.sh@470 -- # waitforlisten 78938 00:18:22.194 06:48:35 -- common/autotest_common.sh@829 -- # '[' -z 78938 ']' 00:18:22.194 06:48:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.194 06:48:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:22.194 06:48:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.194 06:48:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:22.194 06:48:35 -- common/autotest_common.sh@10 -- # set +x 00:18:22.194 [2024-12-14 06:48:36.056524] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:22.194 [2024-12-14 06:48:36.056818] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.453 [2024-12-14 06:48:36.193244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.453 [2024-12-14 06:48:36.277991] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:22.453 [2024-12-14 06:48:36.278560] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.453 [2024-12-14 06:48:36.278610] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.453 [2024-12-14 06:48:36.278725] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
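The target start here follows the stock autotest pattern: nvmf_tgt is launched inside the nvmf_tgt_ns_spdk network namespace with core mask 0x2, and the script blocks until the app answers on its RPC socket before configuring it. A rough sketch of that pattern with the same paths as this run; the polling loop is only an illustration of what the waitforlisten helper does, not its actual implementation:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the app is ready (illustrative)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done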
00:18:22.453 [2024-12-14 06:48:36.278784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.389 06:48:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:23.389 06:48:37 -- common/autotest_common.sh@862 -- # return 0 00:18:23.389 06:48:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:23.389 06:48:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:23.389 06:48:37 -- common/autotest_common.sh@10 -- # set +x 00:18:23.389 06:48:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.389 06:48:37 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:23.389 06:48:37 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:23.389 06:48:37 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:23.389 [2024-12-14 06:48:37.268645] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.389 06:48:37 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:23.647 06:48:37 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:23.906 [2024-12-14 06:48:37.732716] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:23.906 [2024-12-14 06:48:37.733193] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:23.906 06:48:37 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:24.165 malloc0 00:18:24.165 06:48:38 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:24.423 06:48:38 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:24.682 06:48:38 -- target/tls.sh@197 -- # bdevperf_pid=79041 00:18:24.682 06:48:38 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:24.682 06:48:38 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:24.682 06:48:38 -- target/tls.sh@200 -- # waitforlisten 79041 /var/tmp/bdevperf.sock 00:18:24.682 06:48:38 -- common/autotest_common.sh@829 -- # '[' -z 79041 ']' 00:18:24.682 06:48:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.682 06:48:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:24.682 06:48:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.682 06:48:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:24.682 06:48:38 -- common/autotest_common.sh@10 -- # set +x 00:18:24.682 [2024-12-14 06:48:38.546115] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:24.682 [2024-12-14 06:48:38.546254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79041 ] 00:18:24.952 [2024-12-14 06:48:38.687066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.952 [2024-12-14 06:48:38.792086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.538 06:48:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:25.538 06:48:39 -- common/autotest_common.sh@862 -- # return 0 00:18:25.538 06:48:39 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:25.797 [2024-12-14 06:48:39.669651] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:25.797 TLSTESTn1 00:18:25.797 06:48:39 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:18:26.365 06:48:40 -- target/tls.sh@205 -- # tgtconf='{ 00:18:26.365 "subsystems": [ 00:18:26.365 { 00:18:26.365 "subsystem": "iobuf", 00:18:26.365 "config": [ 00:18:26.365 { 00:18:26.365 "method": "iobuf_set_options", 00:18:26.365 "params": { 00:18:26.365 "large_bufsize": 135168, 00:18:26.365 "large_pool_count": 1024, 00:18:26.365 "small_bufsize": 8192, 00:18:26.365 "small_pool_count": 8192 00:18:26.365 } 00:18:26.365 } 00:18:26.365 ] 00:18:26.365 }, 00:18:26.365 { 00:18:26.365 "subsystem": "sock", 00:18:26.365 "config": [ 00:18:26.365 { 00:18:26.365 "method": "sock_impl_set_options", 00:18:26.365 "params": { 00:18:26.365 "enable_ktls": false, 00:18:26.365 "enable_placement_id": 0, 00:18:26.365 "enable_quickack": false, 00:18:26.365 "enable_recv_pipe": true, 00:18:26.365 "enable_zerocopy_send_client": false, 00:18:26.365 "enable_zerocopy_send_server": true, 00:18:26.365 "impl_name": "posix", 00:18:26.365 "recv_buf_size": 2097152, 00:18:26.365 "send_buf_size": 2097152, 00:18:26.365 "tls_version": 0, 00:18:26.365 "zerocopy_threshold": 0 00:18:26.365 } 00:18:26.365 }, 00:18:26.365 { 00:18:26.365 "method": "sock_impl_set_options", 00:18:26.365 "params": { 00:18:26.365 "enable_ktls": false, 00:18:26.365 "enable_placement_id": 0, 00:18:26.365 "enable_quickack": false, 00:18:26.365 "enable_recv_pipe": true, 00:18:26.365 "enable_zerocopy_send_client": false, 00:18:26.365 "enable_zerocopy_send_server": true, 00:18:26.365 "impl_name": "ssl", 00:18:26.365 "recv_buf_size": 4096, 00:18:26.365 "send_buf_size": 4096, 00:18:26.365 "tls_version": 0, 00:18:26.365 "zerocopy_threshold": 0 00:18:26.365 } 00:18:26.365 } 00:18:26.365 ] 00:18:26.365 }, 00:18:26.365 { 00:18:26.365 "subsystem": "vmd", 00:18:26.365 "config": [] 00:18:26.365 }, 00:18:26.365 { 00:18:26.365 "subsystem": "accel", 00:18:26.365 "config": [ 00:18:26.365 { 00:18:26.365 "method": "accel_set_options", 00:18:26.365 "params": { 00:18:26.365 "buf_count": 2048, 00:18:26.365 "large_cache_size": 16, 00:18:26.365 "sequence_count": 2048, 00:18:26.365 "small_cache_size": 128, 00:18:26.365 "task_count": 2048 00:18:26.365 } 00:18:26.365 } 00:18:26.365 ] 00:18:26.365 }, 00:18:26.365 { 00:18:26.365 "subsystem": "bdev", 00:18:26.365 "config": [ 00:18:26.365 { 00:18:26.365 "method": "bdev_set_options", 00:18:26.365 "params": { 00:18:26.365 
"bdev_auto_examine": true, 00:18:26.365 "bdev_io_cache_size": 256, 00:18:26.365 "bdev_io_pool_size": 65535, 00:18:26.365 "iobuf_large_cache_size": 16, 00:18:26.365 "iobuf_small_cache_size": 128 00:18:26.365 } 00:18:26.365 }, 00:18:26.365 { 00:18:26.365 "method": "bdev_raid_set_options", 00:18:26.365 "params": { 00:18:26.365 "process_window_size_kb": 1024 00:18:26.365 } 00:18:26.365 }, 00:18:26.365 { 00:18:26.365 "method": "bdev_iscsi_set_options", 00:18:26.365 "params": { 00:18:26.365 "timeout_sec": 30 00:18:26.365 } 00:18:26.365 }, 00:18:26.365 { 00:18:26.365 "method": "bdev_nvme_set_options", 00:18:26.365 "params": { 00:18:26.365 "action_on_timeout": "none", 00:18:26.365 "allow_accel_sequence": false, 00:18:26.365 "arbitration_burst": 0, 00:18:26.365 "bdev_retry_count": 3, 00:18:26.365 "ctrlr_loss_timeout_sec": 0, 00:18:26.365 "delay_cmd_submit": true, 00:18:26.365 "fast_io_fail_timeout_sec": 0, 00:18:26.365 "generate_uuids": false, 00:18:26.365 "high_priority_weight": 0, 00:18:26.365 "io_path_stat": false, 00:18:26.365 "io_queue_requests": 0, 00:18:26.365 "keep_alive_timeout_ms": 10000, 00:18:26.365 "low_priority_weight": 0, 00:18:26.365 "medium_priority_weight": 0, 00:18:26.365 "nvme_adminq_poll_period_us": 10000, 00:18:26.365 "nvme_ioq_poll_period_us": 0, 00:18:26.365 "reconnect_delay_sec": 0, 00:18:26.365 "timeout_admin_us": 0, 00:18:26.365 "timeout_us": 0, 00:18:26.365 "transport_ack_timeout": 0, 00:18:26.365 "transport_retry_count": 4, 00:18:26.365 "transport_tos": 0 00:18:26.365 } 00:18:26.365 }, 00:18:26.365 { 00:18:26.365 "method": "bdev_nvme_set_hotplug", 00:18:26.365 "params": { 00:18:26.365 "enable": false, 00:18:26.365 "period_us": 100000 00:18:26.365 } 00:18:26.365 }, 00:18:26.365 { 00:18:26.365 "method": "bdev_malloc_create", 00:18:26.365 "params": { 00:18:26.365 "block_size": 4096, 00:18:26.365 "name": "malloc0", 00:18:26.365 "num_blocks": 8192, 00:18:26.365 "optimal_io_boundary": 0, 00:18:26.365 "physical_block_size": 4096, 00:18:26.365 "uuid": "7ed3772c-d9ae-48d2-8ec1-133190bc7f30" 00:18:26.365 } 00:18:26.365 }, 00:18:26.365 { 00:18:26.365 "method": "bdev_wait_for_examine" 00:18:26.365 } 00:18:26.365 ] 00:18:26.365 }, 00:18:26.365 { 00:18:26.365 "subsystem": "nbd", 00:18:26.365 "config": [] 00:18:26.365 }, 00:18:26.365 { 00:18:26.365 "subsystem": "scheduler", 00:18:26.365 "config": [ 00:18:26.365 { 00:18:26.365 "method": "framework_set_scheduler", 00:18:26.365 "params": { 00:18:26.365 "name": "static" 00:18:26.365 } 00:18:26.365 } 00:18:26.365 ] 00:18:26.365 }, 00:18:26.365 { 00:18:26.365 "subsystem": "nvmf", 00:18:26.365 "config": [ 00:18:26.365 { 00:18:26.365 "method": "nvmf_set_config", 00:18:26.365 "params": { 00:18:26.365 "admin_cmd_passthru": { 00:18:26.365 "identify_ctrlr": false 00:18:26.365 }, 00:18:26.365 "discovery_filter": "match_any" 00:18:26.365 } 00:18:26.365 }, 00:18:26.365 { 00:18:26.365 "method": "nvmf_set_max_subsystems", 00:18:26.365 "params": { 00:18:26.365 "max_subsystems": 1024 00:18:26.365 } 00:18:26.365 }, 00:18:26.365 { 00:18:26.365 "method": "nvmf_set_crdt", 00:18:26.365 "params": { 00:18:26.365 "crdt1": 0, 00:18:26.365 "crdt2": 0, 00:18:26.365 "crdt3": 0 00:18:26.365 } 00:18:26.365 }, 00:18:26.365 { 00:18:26.366 "method": "nvmf_create_transport", 00:18:26.366 "params": { 00:18:26.366 "abort_timeout_sec": 1, 00:18:26.366 "buf_cache_size": 4294967295, 00:18:26.366 "c2h_success": false, 00:18:26.366 "dif_insert_or_strip": false, 00:18:26.366 "in_capsule_data_size": 4096, 00:18:26.366 "io_unit_size": 131072, 00:18:26.366 "max_aq_depth": 128, 
00:18:26.366 "max_io_qpairs_per_ctrlr": 127, 00:18:26.366 "max_io_size": 131072, 00:18:26.366 "max_queue_depth": 128, 00:18:26.366 "num_shared_buffers": 511, 00:18:26.366 "sock_priority": 0, 00:18:26.366 "trtype": "TCP", 00:18:26.366 "zcopy": false 00:18:26.366 } 00:18:26.366 }, 00:18:26.366 { 00:18:26.366 "method": "nvmf_create_subsystem", 00:18:26.366 "params": { 00:18:26.366 "allow_any_host": false, 00:18:26.366 "ana_reporting": false, 00:18:26.366 "max_cntlid": 65519, 00:18:26.366 "max_namespaces": 10, 00:18:26.366 "min_cntlid": 1, 00:18:26.366 "model_number": "SPDK bdev Controller", 00:18:26.366 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.366 "serial_number": "SPDK00000000000001" 00:18:26.366 } 00:18:26.366 }, 00:18:26.366 { 00:18:26.366 "method": "nvmf_subsystem_add_host", 00:18:26.366 "params": { 00:18:26.366 "host": "nqn.2016-06.io.spdk:host1", 00:18:26.366 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.366 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:18:26.366 } 00:18:26.366 }, 00:18:26.366 { 00:18:26.366 "method": "nvmf_subsystem_add_ns", 00:18:26.366 "params": { 00:18:26.366 "namespace": { 00:18:26.366 "bdev_name": "malloc0", 00:18:26.366 "nguid": "7ED3772CD9AE48D28EC1133190BC7F30", 00:18:26.366 "nsid": 1, 00:18:26.366 "uuid": "7ed3772c-d9ae-48d2-8ec1-133190bc7f30" 00:18:26.366 }, 00:18:26.366 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:18:26.366 } 00:18:26.366 }, 00:18:26.366 { 00:18:26.366 "method": "nvmf_subsystem_add_listener", 00:18:26.366 "params": { 00:18:26.366 "listen_address": { 00:18:26.366 "adrfam": "IPv4", 00:18:26.366 "traddr": "10.0.0.2", 00:18:26.366 "trsvcid": "4420", 00:18:26.366 "trtype": "TCP" 00:18:26.366 }, 00:18:26.366 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.366 "secure_channel": true 00:18:26.366 } 00:18:26.366 } 00:18:26.366 ] 00:18:26.366 } 00:18:26.366 ] 00:18:26.366 }' 00:18:26.366 06:48:40 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:26.625 06:48:40 -- target/tls.sh@206 -- # bdevperfconf='{ 00:18:26.625 "subsystems": [ 00:18:26.625 { 00:18:26.625 "subsystem": "iobuf", 00:18:26.625 "config": [ 00:18:26.625 { 00:18:26.625 "method": "iobuf_set_options", 00:18:26.625 "params": { 00:18:26.625 "large_bufsize": 135168, 00:18:26.625 "large_pool_count": 1024, 00:18:26.625 "small_bufsize": 8192, 00:18:26.625 "small_pool_count": 8192 00:18:26.625 } 00:18:26.625 } 00:18:26.625 ] 00:18:26.625 }, 00:18:26.625 { 00:18:26.625 "subsystem": "sock", 00:18:26.625 "config": [ 00:18:26.625 { 00:18:26.625 "method": "sock_impl_set_options", 00:18:26.625 "params": { 00:18:26.625 "enable_ktls": false, 00:18:26.625 "enable_placement_id": 0, 00:18:26.625 "enable_quickack": false, 00:18:26.625 "enable_recv_pipe": true, 00:18:26.625 "enable_zerocopy_send_client": false, 00:18:26.625 "enable_zerocopy_send_server": true, 00:18:26.625 "impl_name": "posix", 00:18:26.625 "recv_buf_size": 2097152, 00:18:26.625 "send_buf_size": 2097152, 00:18:26.625 "tls_version": 0, 00:18:26.625 "zerocopy_threshold": 0 00:18:26.625 } 00:18:26.625 }, 00:18:26.625 { 00:18:26.625 "method": "sock_impl_set_options", 00:18:26.625 "params": { 00:18:26.625 "enable_ktls": false, 00:18:26.625 "enable_placement_id": 0, 00:18:26.625 "enable_quickack": false, 00:18:26.625 "enable_recv_pipe": true, 00:18:26.625 "enable_zerocopy_send_client": false, 00:18:26.625 "enable_zerocopy_send_server": true, 00:18:26.625 "impl_name": "ssl", 00:18:26.625 "recv_buf_size": 4096, 00:18:26.625 "send_buf_size": 4096, 00:18:26.625 
"tls_version": 0, 00:18:26.625 "zerocopy_threshold": 0 00:18:26.625 } 00:18:26.625 } 00:18:26.625 ] 00:18:26.625 }, 00:18:26.625 { 00:18:26.625 "subsystem": "vmd", 00:18:26.625 "config": [] 00:18:26.625 }, 00:18:26.625 { 00:18:26.625 "subsystem": "accel", 00:18:26.625 "config": [ 00:18:26.625 { 00:18:26.625 "method": "accel_set_options", 00:18:26.625 "params": { 00:18:26.625 "buf_count": 2048, 00:18:26.625 "large_cache_size": 16, 00:18:26.625 "sequence_count": 2048, 00:18:26.625 "small_cache_size": 128, 00:18:26.625 "task_count": 2048 00:18:26.625 } 00:18:26.625 } 00:18:26.625 ] 00:18:26.625 }, 00:18:26.625 { 00:18:26.625 "subsystem": "bdev", 00:18:26.625 "config": [ 00:18:26.625 { 00:18:26.625 "method": "bdev_set_options", 00:18:26.625 "params": { 00:18:26.625 "bdev_auto_examine": true, 00:18:26.625 "bdev_io_cache_size": 256, 00:18:26.625 "bdev_io_pool_size": 65535, 00:18:26.625 "iobuf_large_cache_size": 16, 00:18:26.625 "iobuf_small_cache_size": 128 00:18:26.625 } 00:18:26.625 }, 00:18:26.625 { 00:18:26.625 "method": "bdev_raid_set_options", 00:18:26.625 "params": { 00:18:26.625 "process_window_size_kb": 1024 00:18:26.625 } 00:18:26.625 }, 00:18:26.625 { 00:18:26.625 "method": "bdev_iscsi_set_options", 00:18:26.625 "params": { 00:18:26.625 "timeout_sec": 30 00:18:26.625 } 00:18:26.625 }, 00:18:26.625 { 00:18:26.625 "method": "bdev_nvme_set_options", 00:18:26.625 "params": { 00:18:26.625 "action_on_timeout": "none", 00:18:26.625 "allow_accel_sequence": false, 00:18:26.625 "arbitration_burst": 0, 00:18:26.625 "bdev_retry_count": 3, 00:18:26.625 "ctrlr_loss_timeout_sec": 0, 00:18:26.625 "delay_cmd_submit": true, 00:18:26.625 "fast_io_fail_timeout_sec": 0, 00:18:26.625 "generate_uuids": false, 00:18:26.625 "high_priority_weight": 0, 00:18:26.625 "io_path_stat": false, 00:18:26.625 "io_queue_requests": 512, 00:18:26.625 "keep_alive_timeout_ms": 10000, 00:18:26.625 "low_priority_weight": 0, 00:18:26.625 "medium_priority_weight": 0, 00:18:26.625 "nvme_adminq_poll_period_us": 10000, 00:18:26.625 "nvme_ioq_poll_period_us": 0, 00:18:26.625 "reconnect_delay_sec": 0, 00:18:26.625 "timeout_admin_us": 0, 00:18:26.625 "timeout_us": 0, 00:18:26.625 "transport_ack_timeout": 0, 00:18:26.625 "transport_retry_count": 4, 00:18:26.625 "transport_tos": 0 00:18:26.625 } 00:18:26.625 }, 00:18:26.625 { 00:18:26.625 "method": "bdev_nvme_attach_controller", 00:18:26.625 "params": { 00:18:26.625 "adrfam": "IPv4", 00:18:26.625 "ctrlr_loss_timeout_sec": 0, 00:18:26.625 "ddgst": false, 00:18:26.625 "fast_io_fail_timeout_sec": 0, 00:18:26.625 "hdgst": false, 00:18:26.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:26.625 "name": "TLSTEST", 00:18:26.625 "prchk_guard": false, 00:18:26.625 "prchk_reftag": false, 00:18:26.626 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:18:26.626 "reconnect_delay_sec": 0, 00:18:26.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.626 "traddr": "10.0.0.2", 00:18:26.626 "trsvcid": "4420", 00:18:26.626 "trtype": "TCP" 00:18:26.626 } 00:18:26.626 }, 00:18:26.626 { 00:18:26.626 "method": "bdev_nvme_set_hotplug", 00:18:26.626 "params": { 00:18:26.626 "enable": false, 00:18:26.626 "period_us": 100000 00:18:26.626 } 00:18:26.626 }, 00:18:26.626 { 00:18:26.626 "method": "bdev_wait_for_examine" 00:18:26.626 } 00:18:26.626 ] 00:18:26.626 }, 00:18:26.626 { 00:18:26.626 "subsystem": "nbd", 00:18:26.626 "config": [] 00:18:26.626 } 00:18:26.626 ] 00:18:26.626 }' 00:18:26.626 06:48:40 -- target/tls.sh@208 -- # killprocess 79041 00:18:26.626 06:48:40 -- 
common/autotest_common.sh@936 -- # '[' -z 79041 ']' 00:18:26.626 06:48:40 -- common/autotest_common.sh@940 -- # kill -0 79041 00:18:26.626 06:48:40 -- common/autotest_common.sh@941 -- # uname 00:18:26.626 06:48:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:26.626 06:48:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79041 00:18:26.626 06:48:40 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:26.626 06:48:40 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:26.626 killing process with pid 79041 00:18:26.626 06:48:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79041' 00:18:26.626 Received shutdown signal, test time was about 10.000000 seconds 00:18:26.626 00:18:26.626 Latency(us) 00:18:26.626 [2024-12-14T06:48:40.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.626 [2024-12-14T06:48:40.618Z] =================================================================================================================== 00:18:26.626 [2024-12-14T06:48:40.618Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:26.626 06:48:40 -- common/autotest_common.sh@955 -- # kill 79041 00:18:26.626 06:48:40 -- common/autotest_common.sh@960 -- # wait 79041 00:18:26.884 06:48:40 -- target/tls.sh@209 -- # killprocess 78938 00:18:26.884 06:48:40 -- common/autotest_common.sh@936 -- # '[' -z 78938 ']' 00:18:26.884 06:48:40 -- common/autotest_common.sh@940 -- # kill -0 78938 00:18:26.884 06:48:40 -- common/autotest_common.sh@941 -- # uname 00:18:26.884 06:48:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:26.884 06:48:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78938 00:18:26.884 06:48:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:26.885 06:48:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:26.885 killing process with pid 78938 00:18:26.885 06:48:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78938' 00:18:26.885 06:48:40 -- common/autotest_common.sh@955 -- # kill 78938 00:18:26.885 06:48:40 -- common/autotest_common.sh@960 -- # wait 78938 00:18:27.144 06:48:41 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:27.144 06:48:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:27.144 06:48:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:27.144 06:48:41 -- common/autotest_common.sh@10 -- # set +x 00:18:27.144 06:48:41 -- target/tls.sh@212 -- # echo '{ 00:18:27.144 "subsystems": [ 00:18:27.144 { 00:18:27.144 "subsystem": "iobuf", 00:18:27.144 "config": [ 00:18:27.144 { 00:18:27.144 "method": "iobuf_set_options", 00:18:27.144 "params": { 00:18:27.144 "large_bufsize": 135168, 00:18:27.144 "large_pool_count": 1024, 00:18:27.144 "small_bufsize": 8192, 00:18:27.144 "small_pool_count": 8192 00:18:27.144 } 00:18:27.144 } 00:18:27.144 ] 00:18:27.144 }, 00:18:27.144 { 00:18:27.144 "subsystem": "sock", 00:18:27.144 "config": [ 00:18:27.144 { 00:18:27.144 "method": "sock_impl_set_options", 00:18:27.144 "params": { 00:18:27.144 "enable_ktls": false, 00:18:27.144 "enable_placement_id": 0, 00:18:27.144 "enable_quickack": false, 00:18:27.144 "enable_recv_pipe": true, 00:18:27.144 "enable_zerocopy_send_client": false, 00:18:27.144 "enable_zerocopy_send_server": true, 00:18:27.144 "impl_name": "posix", 00:18:27.144 "recv_buf_size": 2097152, 00:18:27.144 "send_buf_size": 2097152, 00:18:27.144 "tls_version": 0, 00:18:27.144 
"zerocopy_threshold": 0 00:18:27.144 } 00:18:27.144 }, 00:18:27.144 { 00:18:27.144 "method": "sock_impl_set_options", 00:18:27.144 "params": { 00:18:27.144 "enable_ktls": false, 00:18:27.144 "enable_placement_id": 0, 00:18:27.144 "enable_quickack": false, 00:18:27.144 "enable_recv_pipe": true, 00:18:27.144 "enable_zerocopy_send_client": false, 00:18:27.144 "enable_zerocopy_send_server": true, 00:18:27.144 "impl_name": "ssl", 00:18:27.144 "recv_buf_size": 4096, 00:18:27.144 "send_buf_size": 4096, 00:18:27.144 "tls_version": 0, 00:18:27.144 "zerocopy_threshold": 0 00:18:27.144 } 00:18:27.144 } 00:18:27.144 ] 00:18:27.144 }, 00:18:27.144 { 00:18:27.144 "subsystem": "vmd", 00:18:27.144 "config": [] 00:18:27.144 }, 00:18:27.144 { 00:18:27.144 "subsystem": "accel", 00:18:27.144 "config": [ 00:18:27.144 { 00:18:27.144 "method": "accel_set_options", 00:18:27.144 "params": { 00:18:27.144 "buf_count": 2048, 00:18:27.144 "large_cache_size": 16, 00:18:27.144 "sequence_count": 2048, 00:18:27.144 "small_cache_size": 128, 00:18:27.144 "task_count": 2048 00:18:27.144 } 00:18:27.144 } 00:18:27.144 ] 00:18:27.144 }, 00:18:27.144 { 00:18:27.144 "subsystem": "bdev", 00:18:27.144 "config": [ 00:18:27.144 { 00:18:27.144 "method": "bdev_set_options", 00:18:27.144 "params": { 00:18:27.144 "bdev_auto_examine": true, 00:18:27.144 "bdev_io_cache_size": 256, 00:18:27.144 "bdev_io_pool_size": 65535, 00:18:27.144 "iobuf_large_cache_size": 16, 00:18:27.144 "iobuf_small_cache_size": 128 00:18:27.144 } 00:18:27.144 }, 00:18:27.144 { 00:18:27.144 "method": "bdev_raid_set_options", 00:18:27.144 "params": { 00:18:27.144 "process_window_size_kb": 1024 00:18:27.144 } 00:18:27.144 }, 00:18:27.144 { 00:18:27.144 "method": "bdev_iscsi_set_options", 00:18:27.144 "params": { 00:18:27.144 "timeout_sec": 30 00:18:27.144 } 00:18:27.144 }, 00:18:27.144 { 00:18:27.144 "method": "bdev_nvme_set_options", 00:18:27.144 "params": { 00:18:27.144 "action_on_timeout": "none", 00:18:27.144 "allow_accel_sequence": false, 00:18:27.144 "arbitration_burst": 0, 00:18:27.144 "bdev_retry_count": 3, 00:18:27.144 "ctrlr_loss_timeout_sec": 0, 00:18:27.144 "delay_cmd_submit": true, 00:18:27.144 "fast_io_fail_timeout_sec": 0, 00:18:27.144 "generate_uuids": false, 00:18:27.144 "high_priority_weight": 0, 00:18:27.144 "io_path_stat": false, 00:18:27.144 "io_queue_requests": 0, 00:18:27.144 "keep_alive_timeout_ms": 10000, 00:18:27.144 "low_priority_weight": 0, 00:18:27.144 "medium_priority_weight": 0, 00:18:27.144 "nvme_adminq_poll_period_us": 10000, 00:18:27.144 "nvme_ioq_poll_period_us": 0, 00:18:27.144 "reconnect_delay_sec": 0, 00:18:27.144 "timeout_admin_us": 0, 00:18:27.144 "timeout_us": 0, 00:18:27.144 "transport_ack_timeout": 0, 00:18:27.144 "transport_retry_count": 4, 00:18:27.144 "transport_tos": 0 00:18:27.144 } 00:18:27.144 }, 00:18:27.144 { 00:18:27.144 "method": "bdev_nvme_set_hotplug", 00:18:27.144 "params": { 00:18:27.144 "enable": false, 00:18:27.144 "period_us": 100000 00:18:27.144 } 00:18:27.144 }, 00:18:27.144 { 00:18:27.144 "method": "bdev_malloc_create", 00:18:27.144 "params": { 00:18:27.144 "block_size": 4096, 00:18:27.144 "name": "malloc0", 00:18:27.144 "num_blocks": 8192, 00:18:27.144 "optimal_io_boundary": 0, 00:18:27.144 "physical_block_size": 4096, 00:18:27.144 "uuid": "7ed3772c-d9ae-48d2-8ec1-133190bc7f30" 00:18:27.144 } 00:18:27.144 }, 00:18:27.144 { 00:18:27.144 "method": "bdev_wait_for_examine" 00:18:27.144 } 00:18:27.144 ] 00:18:27.144 }, 00:18:27.144 { 00:18:27.145 "subsystem": "nbd", 00:18:27.145 "config": [] 00:18:27.145 }, 
00:18:27.145 { 00:18:27.145 "subsystem": "scheduler", 00:18:27.145 "config": [ 00:18:27.145 { 00:18:27.145 "method": "framework_set_scheduler", 00:18:27.145 "params": { 00:18:27.145 "name": "static" 00:18:27.145 } 00:18:27.145 } 00:18:27.145 ] 00:18:27.145 }, 00:18:27.145 { 00:18:27.145 "subsystem": "nvmf", 00:18:27.145 "config": [ 00:18:27.145 { 00:18:27.145 "method": "nvmf_set_config", 00:18:27.145 "params": { 00:18:27.145 "admin_cmd_passthru": { 00:18:27.145 "identify_ctrlr": false 00:18:27.145 }, 00:18:27.145 "discovery_filter": "match_any" 00:18:27.145 } 00:18:27.145 }, 00:18:27.145 { 00:18:27.145 "method": "nvmf_set_max_subsystems", 00:18:27.145 "params": { 00:18:27.145 "max_subsystems": 1024 00:18:27.145 } 00:18:27.145 }, 00:18:27.145 { 00:18:27.145 "method": "nvmf_set_crdt", 00:18:27.145 "params": { 00:18:27.145 "crdt1": 0, 00:18:27.145 "crdt2": 0, 00:18:27.145 "crdt3": 0 00:18:27.145 } 00:18:27.145 }, 00:18:27.145 { 00:18:27.145 "method": "nvmf_create_transport", 00:18:27.145 "params": { 00:18:27.145 "abort_timeout_sec": 1, 00:18:27.145 "buf_cache_size": 4294967295, 00:18:27.145 "c2h_success": false, 00:18:27.145 "dif_insert_or_strip": false, 00:18:27.145 "in_capsule_data_size": 4096, 00:18:27.145 "io_unit_size": 131072, 00:18:27.145 "max_aq_depth": 128, 00:18:27.145 "max_io_qpairs_per_ctrlr": 127, 00:18:27.145 "max_io_size": 131072, 00:18:27.145 "max_queue_depth": 128, 00:18:27.145 "num_shared_buffers": 511, 00:18:27.145 "sock_priority": 0, 00:18:27.145 "trtype": "TCP", 00:18:27.145 "zcopy": false 00:18:27.145 } 00:18:27.145 }, 00:18:27.145 { 00:18:27.145 "method": "nvmf_create_subsystem", 00:18:27.145 "params": { 00:18:27.145 "allow_any_host": false, 00:18:27.145 "ana_reporting": false, 00:18:27.145 "max_cntlid": 65519, 00:18:27.145 "max_namespaces": 10, 00:18:27.145 "min_cntlid": 1, 00:18:27.145 "model_number": "SPDK bdev Controller", 00:18:27.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.145 "serial_number": "SPDK00000000000001" 00:18:27.145 } 00:18:27.145 }, 00:18:27.145 { 00:18:27.145 "method": "nvmf_subsystem_add_host", 00:18:27.145 "params": { 00:18:27.145 "host": "nqn.2016-06.io.spdk:host1", 00:18:27.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.145 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:18:27.145 } 00:18:27.145 }, 00:18:27.145 { 00:18:27.145 "method": "nvmf_subsystem_add_ns", 00:18:27.145 "params": { 00:18:27.145 "namespace": { 00:18:27.145 "bdev_name": "malloc0", 00:18:27.145 "nguid": "7ED3772CD9AE48D28EC1133190BC7F30", 00:18:27.145 "nsid": 1, 00:18:27.145 "uuid": "7ed3772c-d9ae-48d2-8ec1-133190bc7f30" 00:18:27.145 }, 00:18:27.145 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:18:27.145 } 00:18:27.145 }, 00:18:27.145 { 00:18:27.145 "method": "nvmf_subsystem_add_listener", 00:18:27.145 "params": { 00:18:27.145 "listen_address": { 00:18:27.145 "adrfam": "IPv4", 00:18:27.145 "traddr": "10.0.0.2", 00:18:27.145 "trsvcid": "4420", 00:18:27.145 "trtype": "TCP" 00:18:27.145 }, 00:18:27.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.145 "secure_channel": true 00:18:27.145 } 00:18:27.145 } 00:18:27.145 ] 00:18:27.145 } 00:18:27.145 ] 00:18:27.145 }' 00:18:27.145 06:48:41 -- nvmf/common.sh@469 -- # nvmfpid=79120 00:18:27.145 06:48:41 -- nvmf/common.sh@470 -- # waitforlisten 79120 00:18:27.145 06:48:41 -- common/autotest_common.sh@829 -- # '[' -z 79120 ']' 00:18:27.145 06:48:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
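The target that starts next is fed the JSON captured a few steps earlier with save_config: the blob is echoed into a process-substitution file descriptor and passed via -c, so the TCP transport, subsystem, TLS listener and PSK host entry are recreated at start-up instead of being re-issued as individual RPCs. A hedged sketch of the same round-trip using a named file in place of /dev/fd (tgt.json is a placeholder name, not one used by the test):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock save_config > tgt.json
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x2 -c tgt.json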
00:18:27.145 06:48:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.145 06:48:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.145 06:48:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.145 06:48:41 -- common/autotest_common.sh@10 -- # set +x 00:18:27.145 06:48:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:27.404 [2024-12-14 06:48:41.193487] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:27.404 [2024-12-14 06:48:41.193588] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.404 [2024-12-14 06:48:41.327931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.663 [2024-12-14 06:48:41.429877] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:27.663 [2024-12-14 06:48:41.430051] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.663 [2024-12-14 06:48:41.430081] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.663 [2024-12-14 06:48:41.430089] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.663 [2024-12-14 06:48:41.430126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.922 [2024-12-14 06:48:41.673566] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.922 [2024-12-14 06:48:41.705522] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:27.922 [2024-12-14 06:48:41.705733] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.490 06:48:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.490 06:48:42 -- common/autotest_common.sh@862 -- # return 0 00:18:28.490 06:48:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:28.490 06:48:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:28.490 06:48:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.490 06:48:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.490 06:48:42 -- target/tls.sh@216 -- # bdevperf_pid=79164 00:18:28.490 06:48:42 -- target/tls.sh@217 -- # waitforlisten 79164 /var/tmp/bdevperf.sock 00:18:28.490 06:48:42 -- common/autotest_common.sh@829 -- # '[' -z 79164 ']' 00:18:28.490 06:48:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.490 06:48:42 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:28.490 06:48:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.490 06:48:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
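The initiator side uses the same trick: bdevperf is started with -z (stay idle until told to run) and a JSON config on a file descriptor, so the bdev_nvme_attach_controller entry echoed below, including its psk path, is applied during start-up rather than through a separate RPC as in the earlier cases. A minimal sketch with a placeholder config file name:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c bdevperf.json &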
00:18:28.490 06:48:42 -- target/tls.sh@213 -- # echo '{ 00:18:28.490 "subsystems": [ 00:18:28.490 { 00:18:28.490 "subsystem": "iobuf", 00:18:28.490 "config": [ 00:18:28.490 { 00:18:28.490 "method": "iobuf_set_options", 00:18:28.490 "params": { 00:18:28.490 "large_bufsize": 135168, 00:18:28.490 "large_pool_count": 1024, 00:18:28.490 "small_bufsize": 8192, 00:18:28.490 "small_pool_count": 8192 00:18:28.490 } 00:18:28.490 } 00:18:28.490 ] 00:18:28.490 }, 00:18:28.490 { 00:18:28.490 "subsystem": "sock", 00:18:28.490 "config": [ 00:18:28.490 { 00:18:28.490 "method": "sock_impl_set_options", 00:18:28.490 "params": { 00:18:28.490 "enable_ktls": false, 00:18:28.490 "enable_placement_id": 0, 00:18:28.490 "enable_quickack": false, 00:18:28.490 "enable_recv_pipe": true, 00:18:28.490 "enable_zerocopy_send_client": false, 00:18:28.490 "enable_zerocopy_send_server": true, 00:18:28.490 "impl_name": "posix", 00:18:28.490 "recv_buf_size": 2097152, 00:18:28.490 "send_buf_size": 2097152, 00:18:28.490 "tls_version": 0, 00:18:28.490 "zerocopy_threshold": 0 00:18:28.490 } 00:18:28.490 }, 00:18:28.490 { 00:18:28.490 "method": "sock_impl_set_options", 00:18:28.490 "params": { 00:18:28.490 "enable_ktls": false, 00:18:28.490 "enable_placement_id": 0, 00:18:28.490 "enable_quickack": false, 00:18:28.490 "enable_recv_pipe": true, 00:18:28.490 "enable_zerocopy_send_client": false, 00:18:28.490 "enable_zerocopy_send_server": true, 00:18:28.490 "impl_name": "ssl", 00:18:28.490 "recv_buf_size": 4096, 00:18:28.490 "send_buf_size": 4096, 00:18:28.490 "tls_version": 0, 00:18:28.490 "zerocopy_threshold": 0 00:18:28.490 } 00:18:28.490 } 00:18:28.490 ] 00:18:28.490 }, 00:18:28.490 { 00:18:28.490 "subsystem": "vmd", 00:18:28.490 "config": [] 00:18:28.490 }, 00:18:28.490 { 00:18:28.490 "subsystem": "accel", 00:18:28.490 "config": [ 00:18:28.490 { 00:18:28.490 "method": "accel_set_options", 00:18:28.490 "params": { 00:18:28.490 "buf_count": 2048, 00:18:28.490 "large_cache_size": 16, 00:18:28.490 "sequence_count": 2048, 00:18:28.490 "small_cache_size": 128, 00:18:28.490 "task_count": 2048 00:18:28.490 } 00:18:28.490 } 00:18:28.490 ] 00:18:28.490 }, 00:18:28.490 { 00:18:28.490 "subsystem": "bdev", 00:18:28.490 "config": [ 00:18:28.490 { 00:18:28.490 "method": "bdev_set_options", 00:18:28.490 "params": { 00:18:28.490 "bdev_auto_examine": true, 00:18:28.490 "bdev_io_cache_size": 256, 00:18:28.490 "bdev_io_pool_size": 65535, 00:18:28.490 "iobuf_large_cache_size": 16, 00:18:28.490 "iobuf_small_cache_size": 128 00:18:28.490 } 00:18:28.490 }, 00:18:28.490 { 00:18:28.490 "method": "bdev_raid_set_options", 00:18:28.490 "params": { 00:18:28.490 "process_window_size_kb": 1024 00:18:28.490 } 00:18:28.490 }, 00:18:28.490 { 00:18:28.490 "method": "bdev_iscsi_set_options", 00:18:28.490 "params": { 00:18:28.490 "timeout_sec": 30 00:18:28.490 } 00:18:28.490 }, 00:18:28.490 { 00:18:28.490 "method": "bdev_nvme_set_options", 00:18:28.490 "params": { 00:18:28.490 "action_on_timeout": "none", 00:18:28.490 "allow_accel_sequence": false, 00:18:28.490 "arbitration_burst": 0, 00:18:28.490 "bdev_retry_count": 3, 00:18:28.490 "ctrlr_loss_timeout_sec": 0, 00:18:28.490 "delay_cmd_submit": true, 00:18:28.490 "fast_io_fail_timeout_sec": 0, 00:18:28.490 "generate_uuids": false, 00:18:28.490 "high_priority_weight": 0, 00:18:28.490 "io_path_stat": false, 00:18:28.490 "io_queue_requests": 512, 00:18:28.490 "keep_alive_timeout_ms": 10000, 00:18:28.490 "low_priority_weight": 0, 00:18:28.490 "medium_priority_weight": 0, 00:18:28.490 "nvme_adminq_poll_period_us": 10000, 
00:18:28.490 "nvme_ioq_poll_period_us": 0, 00:18:28.490 "reconnect_delay_sec": 0, 00:18:28.490 "timeout_admin_us": 0, 00:18:28.490 "timeout_us": 0, 00:18:28.490 "transport_ack_timeout": 0, 00:18:28.490 "transport_retry_count": 4, 00:18:28.490 "transport_tos": 0 00:18:28.490 } 00:18:28.490 }, 00:18:28.490 { 00:18:28.490 "method": "bdev_nvme_attach_controller", 00:18:28.490 "params": { 00:18:28.490 "adrfam": "IPv4", 00:18:28.490 "ctrlr_loss_timeout_sec": 0, 00:18:28.490 "ddgst": false, 00:18:28.490 "fast_io_fail_timeout_sec": 0, 00:18:28.490 "hdgst": false, 00:18:28.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.490 "name": "TLSTEST", 00:18:28.490 "prchk_guard": false, 00:18:28.490 "prchk_reftag": false, 00:18:28.490 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:18:28.490 "reconnect_delay_sec": 0, 00:18:28.490 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.490 "traddr": "10.0.0.2", 00:18:28.490 "trsvcid": "4420", 00:18:28.490 "trtype": "TCP" 00:18:28.490 } 00:18:28.490 }, 00:18:28.490 { 00:18:28.490 "method": "bdev_nvme_set_hotplug", 00:18:28.490 "params": { 00:18:28.490 "enable": false, 00:18:28.490 "period_us": 100000 00:18:28.490 } 00:18:28.490 }, 00:18:28.490 { 00:18:28.490 "method": "bdev_wait_for_examine" 00:18:28.490 } 00:18:28.490 ] 00:18:28.490 }, 00:18:28.490 { 00:18:28.490 "subsystem": "nbd", 00:18:28.490 "config": [] 00:18:28.490 } 00:18:28.490 ] 00:18:28.490 }' 00:18:28.490 06:48:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.490 06:48:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.490 [2024-12-14 06:48:42.289059] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:28.490 [2024-12-14 06:48:42.289148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79164 ] 00:18:28.490 [2024-12-14 06:48:42.430355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.749 [2024-12-14 06:48:42.555026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.749 [2024-12-14 06:48:42.736045] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:29.316 06:48:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:29.316 06:48:43 -- common/autotest_common.sh@862 -- # return 0 00:18:29.316 06:48:43 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:29.575 Running I/O for 10 seconds... 
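The 10-second verify workload that produces the table below is driven over the bdevperf RPC socket by the helper script traced above; it blocks until the run finishes and propagates a failure if any job fails. The call as issued in this run:

  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The -t 20 is the helper's own timeout, chosen longer than the 10-second I/O duration configured on the bdevperf command line.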
00:18:39.548 00:18:39.548 Latency(us) 00:18:39.548 [2024-12-14T06:48:53.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.548 [2024-12-14T06:48:53.540Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:39.548 Verification LBA range: start 0x0 length 0x2000 00:18:39.548 TLSTESTn1 : 10.02 6193.87 24.19 0.00 0.00 20630.69 5540.77 18588.39 00:18:39.548 [2024-12-14T06:48:53.540Z] =================================================================================================================== 00:18:39.548 [2024-12-14T06:48:53.540Z] Total : 6193.87 24.19 0.00 0.00 20630.69 5540.77 18588.39 00:18:39.548 0 00:18:39.548 06:48:53 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:39.548 06:48:53 -- target/tls.sh@223 -- # killprocess 79164 00:18:39.548 06:48:53 -- common/autotest_common.sh@936 -- # '[' -z 79164 ']' 00:18:39.548 06:48:53 -- common/autotest_common.sh@940 -- # kill -0 79164 00:18:39.548 06:48:53 -- common/autotest_common.sh@941 -- # uname 00:18:39.548 06:48:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:39.548 06:48:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79164 00:18:39.548 06:48:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:39.548 killing process with pid 79164 00:18:39.548 Received shutdown signal, test time was about 10.000000 seconds 00:18:39.548 00:18:39.548 Latency(us) 00:18:39.548 [2024-12-14T06:48:53.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.548 [2024-12-14T06:48:53.540Z] =================================================================================================================== 00:18:39.548 [2024-12-14T06:48:53.540Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:39.548 06:48:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:39.548 06:48:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79164' 00:18:39.548 06:48:53 -- common/autotest_common.sh@955 -- # kill 79164 00:18:39.548 06:48:53 -- common/autotest_common.sh@960 -- # wait 79164 00:18:39.807 06:48:53 -- target/tls.sh@224 -- # killprocess 79120 00:18:39.807 06:48:53 -- common/autotest_common.sh@936 -- # '[' -z 79120 ']' 00:18:39.807 06:48:53 -- common/autotest_common.sh@940 -- # kill -0 79120 00:18:39.807 06:48:53 -- common/autotest_common.sh@941 -- # uname 00:18:39.807 06:48:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:39.807 06:48:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79120 00:18:39.807 06:48:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:39.807 killing process with pid 79120 00:18:39.807 06:48:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:39.807 06:48:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79120' 00:18:39.807 06:48:53 -- common/autotest_common.sh@955 -- # kill 79120 00:18:39.807 06:48:53 -- common/autotest_common.sh@960 -- # wait 79120 00:18:40.374 06:48:54 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:18:40.374 06:48:54 -- target/tls.sh@227 -- # cleanup 00:18:40.374 06:48:54 -- target/tls.sh@15 -- # process_shm --id 0 00:18:40.374 06:48:54 -- common/autotest_common.sh@806 -- # type=--id 00:18:40.374 06:48:54 -- common/autotest_common.sh@807 -- # id=0 00:18:40.374 06:48:54 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:40.374 06:48:54 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' 
-printf '%f\n' 00:18:40.374 06:48:54 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:40.374 06:48:54 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:40.374 06:48:54 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:40.374 06:48:54 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:40.374 nvmf_trace.0 00:18:40.374 06:48:54 -- common/autotest_common.sh@821 -- # return 0 00:18:40.374 06:48:54 -- target/tls.sh@16 -- # killprocess 79164 00:18:40.374 06:48:54 -- common/autotest_common.sh@936 -- # '[' -z 79164 ']' 00:18:40.374 06:48:54 -- common/autotest_common.sh@940 -- # kill -0 79164 00:18:40.374 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (79164) - No such process 00:18:40.375 Process with pid 79164 is not found 00:18:40.375 06:48:54 -- common/autotest_common.sh@963 -- # echo 'Process with pid 79164 is not found' 00:18:40.375 06:48:54 -- target/tls.sh@17 -- # nvmftestfini 00:18:40.375 06:48:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:40.375 06:48:54 -- nvmf/common.sh@116 -- # sync 00:18:40.375 06:48:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:40.375 06:48:54 -- nvmf/common.sh@119 -- # set +e 00:18:40.375 06:48:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:40.375 06:48:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:40.375 rmmod nvme_tcp 00:18:40.375 rmmod nvme_fabrics 00:18:40.375 rmmod nvme_keyring 00:18:40.375 06:48:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:40.375 06:48:54 -- nvmf/common.sh@123 -- # set -e 00:18:40.375 06:48:54 -- nvmf/common.sh@124 -- # return 0 00:18:40.375 06:48:54 -- nvmf/common.sh@477 -- # '[' -n 79120 ']' 00:18:40.375 06:48:54 -- nvmf/common.sh@478 -- # killprocess 79120 00:18:40.375 06:48:54 -- common/autotest_common.sh@936 -- # '[' -z 79120 ']' 00:18:40.375 06:48:54 -- common/autotest_common.sh@940 -- # kill -0 79120 00:18:40.375 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (79120) - No such process 00:18:40.375 Process with pid 79120 is not found 00:18:40.375 06:48:54 -- common/autotest_common.sh@963 -- # echo 'Process with pid 79120 is not found' 00:18:40.375 06:48:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:40.375 06:48:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:40.375 06:48:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:40.375 06:48:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:40.375 06:48:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:40.375 06:48:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.375 06:48:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.375 06:48:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.375 06:48:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:40.375 06:48:54 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:40.375 00:18:40.375 real 1m12.794s 00:18:40.375 user 1m51.351s 00:18:40.375 sys 0m25.572s 00:18:40.375 06:48:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:40.375 06:48:54 -- common/autotest_common.sh@10 -- # set +x 00:18:40.375 ************************************ 00:18:40.375 END TEST nvmf_tls 00:18:40.375 
************************************ 00:18:40.375 06:48:54 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:40.375 06:48:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:40.375 06:48:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:40.375 06:48:54 -- common/autotest_common.sh@10 -- # set +x 00:18:40.634 ************************************ 00:18:40.634 START TEST nvmf_fips 00:18:40.634 ************************************ 00:18:40.634 06:48:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:40.634 * Looking for test storage... 00:18:40.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:40.634 06:48:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:40.634 06:48:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:40.634 06:48:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:40.634 06:48:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:40.634 06:48:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:40.634 06:48:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:40.634 06:48:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:40.634 06:48:54 -- scripts/common.sh@335 -- # IFS=.-: 00:18:40.634 06:48:54 -- scripts/common.sh@335 -- # read -ra ver1 00:18:40.634 06:48:54 -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.634 06:48:54 -- scripts/common.sh@336 -- # read -ra ver2 00:18:40.634 06:48:54 -- scripts/common.sh@337 -- # local 'op=<' 00:18:40.634 06:48:54 -- scripts/common.sh@339 -- # ver1_l=2 00:18:40.634 06:48:54 -- scripts/common.sh@340 -- # ver2_l=1 00:18:40.634 06:48:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:40.634 06:48:54 -- scripts/common.sh@343 -- # case "$op" in 00:18:40.634 06:48:54 -- scripts/common.sh@344 -- # : 1 00:18:40.634 06:48:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:40.634 06:48:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.634 06:48:54 -- scripts/common.sh@364 -- # decimal 1 00:18:40.634 06:48:54 -- scripts/common.sh@352 -- # local d=1 00:18:40.634 06:48:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.634 06:48:54 -- scripts/common.sh@354 -- # echo 1 00:18:40.634 06:48:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:40.634 06:48:54 -- scripts/common.sh@365 -- # decimal 2 00:18:40.634 06:48:54 -- scripts/common.sh@352 -- # local d=2 00:18:40.634 06:48:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.634 06:48:54 -- scripts/common.sh@354 -- # echo 2 00:18:40.634 06:48:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:40.634 06:48:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:40.634 06:48:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:40.634 06:48:54 -- scripts/common.sh@367 -- # return 0 00:18:40.634 06:48:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.634 06:48:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:40.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.634 --rc genhtml_branch_coverage=1 00:18:40.634 --rc genhtml_function_coverage=1 00:18:40.634 --rc genhtml_legend=1 00:18:40.634 --rc geninfo_all_blocks=1 00:18:40.634 --rc geninfo_unexecuted_blocks=1 00:18:40.634 00:18:40.634 ' 00:18:40.634 06:48:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:40.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.634 --rc genhtml_branch_coverage=1 00:18:40.634 --rc genhtml_function_coverage=1 00:18:40.634 --rc genhtml_legend=1 00:18:40.634 --rc geninfo_all_blocks=1 00:18:40.634 --rc geninfo_unexecuted_blocks=1 00:18:40.634 00:18:40.634 ' 00:18:40.634 06:48:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:40.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.634 --rc genhtml_branch_coverage=1 00:18:40.634 --rc genhtml_function_coverage=1 00:18:40.634 --rc genhtml_legend=1 00:18:40.634 --rc geninfo_all_blocks=1 00:18:40.634 --rc geninfo_unexecuted_blocks=1 00:18:40.634 00:18:40.634 ' 00:18:40.634 06:48:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:40.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.634 --rc genhtml_branch_coverage=1 00:18:40.634 --rc genhtml_function_coverage=1 00:18:40.634 --rc genhtml_legend=1 00:18:40.634 --rc geninfo_all_blocks=1 00:18:40.634 --rc geninfo_unexecuted_blocks=1 00:18:40.634 00:18:40.634 ' 00:18:40.634 06:48:54 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:40.634 06:48:54 -- nvmf/common.sh@7 -- # uname -s 00:18:40.634 06:48:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.634 06:48:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.634 06:48:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.634 06:48:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.634 06:48:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.634 06:48:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.634 06:48:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.634 06:48:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.634 06:48:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.634 06:48:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.634 06:48:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:18:40.634 
06:48:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:18:40.634 06:48:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.634 06:48:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.634 06:48:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:40.634 06:48:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:40.635 06:48:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.635 06:48:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.635 06:48:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.635 06:48:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.635 06:48:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.635 06:48:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.635 06:48:54 -- paths/export.sh@5 -- # export PATH 00:18:40.635 06:48:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.635 06:48:54 -- nvmf/common.sh@46 -- # : 0 00:18:40.635 06:48:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:40.635 06:48:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:40.635 06:48:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:40.635 06:48:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.635 06:48:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.635 06:48:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
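A minimal sketch of how the NVME_HOSTNQN/NVME_HOSTID pair produced by nvme gen-hostnqn above ends up being used: NVME_HOST expands to the two --host* flags for nvme connect. The target address, port and subsystem NQN below are the ones this run configures later; the assembled command line itself is an assumption for illustration, not a line from the log.

nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"   # the last two flags are exactly what $NVME_HOST carries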
00:18:40.635 06:48:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:40.635 06:48:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:40.635 06:48:54 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:40.635 06:48:54 -- fips/fips.sh@89 -- # check_openssl_version 00:18:40.635 06:48:54 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:40.635 06:48:54 -- fips/fips.sh@85 -- # openssl version 00:18:40.635 06:48:54 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:40.635 06:48:54 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:18:40.635 06:48:54 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:40.635 06:48:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:40.635 06:48:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:40.635 06:48:54 -- scripts/common.sh@335 -- # IFS=.-: 00:18:40.635 06:48:54 -- scripts/common.sh@335 -- # read -ra ver1 00:18:40.635 06:48:54 -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.635 06:48:54 -- scripts/common.sh@336 -- # read -ra ver2 00:18:40.635 06:48:54 -- scripts/common.sh@337 -- # local 'op=>=' 00:18:40.635 06:48:54 -- scripts/common.sh@339 -- # ver1_l=3 00:18:40.635 06:48:54 -- scripts/common.sh@340 -- # ver2_l=3 00:18:40.635 06:48:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:40.635 06:48:54 -- scripts/common.sh@343 -- # case "$op" in 00:18:40.635 06:48:54 -- scripts/common.sh@347 -- # : 1 00:18:40.635 06:48:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:40.635 06:48:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:40.635 06:48:54 -- scripts/common.sh@364 -- # decimal 3 00:18:40.635 06:48:54 -- scripts/common.sh@352 -- # local d=3 00:18:40.635 06:48:54 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:40.635 06:48:54 -- scripts/common.sh@354 -- # echo 3 00:18:40.635 06:48:54 -- scripts/common.sh@364 -- # ver1[v]=3 00:18:40.635 06:48:54 -- scripts/common.sh@365 -- # decimal 3 00:18:40.635 06:48:54 -- scripts/common.sh@352 -- # local d=3 00:18:40.635 06:48:54 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:40.635 06:48:54 -- scripts/common.sh@354 -- # echo 3 00:18:40.635 06:48:54 -- scripts/common.sh@365 -- # ver2[v]=3 00:18:40.635 06:48:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:40.635 06:48:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:40.635 06:48:54 -- scripts/common.sh@363 -- # (( v++ )) 00:18:40.635 06:48:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:40.635 06:48:54 -- scripts/common.sh@364 -- # decimal 1 00:18:40.635 06:48:54 -- scripts/common.sh@352 -- # local d=1 00:18:40.635 06:48:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.635 06:48:54 -- scripts/common.sh@354 -- # echo 1 00:18:40.635 06:48:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:40.635 06:48:54 -- scripts/common.sh@365 -- # decimal 0 00:18:40.635 06:48:54 -- scripts/common.sh@352 -- # local d=0 00:18:40.635 06:48:54 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:40.635 06:48:54 -- scripts/common.sh@354 -- # echo 0 00:18:40.635 06:48:54 -- scripts/common.sh@365 -- # ver2[v]=0 00:18:40.635 06:48:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:40.635 06:48:54 -- scripts/common.sh@366 -- # return 0 00:18:40.635 06:48:54 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:40.635 06:48:54 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:40.635 06:48:54 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:40.894 06:48:54 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:40.894 06:48:54 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:40.894 06:48:54 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:40.894 06:48:54 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:40.894 06:48:54 -- fips/fips.sh@113 -- # build_openssl_config 00:18:40.894 06:48:54 -- fips/fips.sh@37 -- # cat 00:18:40.894 06:48:54 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:40.894 06:48:54 -- fips/fips.sh@58 -- # cat - 00:18:40.894 06:48:54 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:40.894 06:48:54 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:40.894 06:48:54 -- fips/fips.sh@116 -- # mapfile -t providers 00:18:40.894 06:48:54 -- fips/fips.sh@116 -- # openssl list -providers 00:18:40.894 06:48:54 -- fips/fips.sh@116 -- # grep name 00:18:40.894 06:48:54 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:40.894 06:48:54 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:40.894 06:48:54 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:40.894 06:48:54 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:40.894 06:48:54 -- fips/fips.sh@127 -- # : 00:18:40.894 06:48:54 -- common/autotest_common.sh@650 -- # local es=0 00:18:40.894 06:48:54 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:40.894 06:48:54 -- common/autotest_common.sh@638 -- # local arg=openssl 00:18:40.894 06:48:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.894 06:48:54 -- common/autotest_common.sh@642 -- # type -t openssl 00:18:40.894 06:48:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.894 06:48:54 -- common/autotest_common.sh@644 -- # type -P openssl 00:18:40.894 06:48:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.894 06:48:54 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:18:40.894 06:48:54 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:18:40.894 06:48:54 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:18:40.894 Error setting digest 00:18:40.894 40924446777F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:40.894 40924446777F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:40.894 06:48:54 -- common/autotest_common.sh@653 -- # es=1 00:18:40.894 06:48:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:40.894 06:48:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:40.894 06:48:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:40.894 06:48:54 -- fips/fips.sh@130 -- # nvmftestinit 00:18:40.894 06:48:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:40.894 06:48:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.894 06:48:54 -- nvmf/common.sh@436 -- # prepare_net_devs 
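Condensed, the FIPS probe fips.sh just ran amounts to the commands below; the modules directory is whatever openssl info -modulesdir reports on the host (here /usr/lib64/ossl-modules), and the echo messages are placeholders rather than script output.

openssl version                                 # must be >= 3.0.0
openssl info -modulesdir                        # directory must contain fips.so
openssl list -providers | grep name             # both a base and a fips provider must be listed
OPENSSL_CONF=spdk_fips.conf openssl md5 /dev/null \
    && echo "MD5 accepted: FIPS is NOT being enforced" \
    || echo "MD5 rejected: FIPS provider is active"   # the 'Error setting digest' above is this rejection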
00:18:40.894 06:48:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:40.894 06:48:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:40.894 06:48:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.894 06:48:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.894 06:48:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.894 06:48:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:40.894 06:48:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:40.894 06:48:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:40.894 06:48:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:40.894 06:48:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:40.894 06:48:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:40.894 06:48:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.894 06:48:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.894 06:48:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:40.894 06:48:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:40.894 06:48:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:40.894 06:48:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:40.894 06:48:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:40.894 06:48:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.895 06:48:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:40.895 06:48:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:40.895 06:48:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:40.895 06:48:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:40.895 06:48:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:40.895 06:48:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:40.895 Cannot find device "nvmf_tgt_br" 00:18:40.895 06:48:54 -- nvmf/common.sh@154 -- # true 00:18:40.895 06:48:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:40.895 Cannot find device "nvmf_tgt_br2" 00:18:40.895 06:48:54 -- nvmf/common.sh@155 -- # true 00:18:40.895 06:48:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:40.895 06:48:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:40.895 Cannot find device "nvmf_tgt_br" 00:18:40.895 06:48:54 -- nvmf/common.sh@157 -- # true 00:18:40.895 06:48:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:40.895 Cannot find device "nvmf_tgt_br2" 00:18:40.895 06:48:54 -- nvmf/common.sh@158 -- # true 00:18:40.895 06:48:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:40.895 06:48:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:40.895 06:48:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:40.895 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.895 06:48:54 -- nvmf/common.sh@161 -- # true 00:18:40.895 06:48:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:40.895 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.895 06:48:54 -- nvmf/common.sh@162 -- # true 00:18:40.895 06:48:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:40.895 06:48:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:40.895 06:48:54 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:41.154 06:48:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:41.154 06:48:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:41.154 06:48:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:41.154 06:48:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:41.154 06:48:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:41.154 06:48:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:41.154 06:48:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:41.154 06:48:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:41.154 06:48:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:41.154 06:48:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:41.154 06:48:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:41.154 06:48:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:41.154 06:48:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:41.154 06:48:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:41.154 06:48:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:41.154 06:48:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:41.154 06:48:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:41.154 06:48:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:41.154 06:48:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:41.154 06:48:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:41.154 06:48:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:41.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:41.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:18:41.154 00:18:41.154 --- 10.0.0.2 ping statistics --- 00:18:41.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.154 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:18:41.154 06:48:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:41.154 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:41.154 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:18:41.154 00:18:41.154 --- 10.0.0.3 ping statistics --- 00:18:41.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.154 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:41.154 06:48:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:41.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:41.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:18:41.154 00:18:41.154 --- 10.0.0.1 ping statistics --- 00:18:41.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.154 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:41.154 06:48:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:41.154 06:48:55 -- nvmf/common.sh@421 -- # return 0 00:18:41.154 06:48:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:41.154 06:48:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:41.154 06:48:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:41.154 06:48:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:41.154 06:48:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:41.154 06:48:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:41.154 06:48:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:41.154 06:48:55 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:41.154 06:48:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:41.154 06:48:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:41.154 06:48:55 -- common/autotest_common.sh@10 -- # set +x 00:18:41.154 06:48:55 -- nvmf/common.sh@469 -- # nvmfpid=79531 00:18:41.154 06:48:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:41.154 06:48:55 -- nvmf/common.sh@470 -- # waitforlisten 79531 00:18:41.154 06:48:55 -- common/autotest_common.sh@829 -- # '[' -z 79531 ']' 00:18:41.154 06:48:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.154 06:48:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:41.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.154 06:48:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.154 06:48:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:41.154 06:48:55 -- common/autotest_common.sh@10 -- # set +x 00:18:41.424 [2024-12-14 06:48:55.180430] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:41.424 [2024-12-14 06:48:55.180523] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.424 [2024-12-14 06:48:55.322415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.699 [2024-12-14 06:48:55.447031] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:41.699 [2024-12-14 06:48:55.447207] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.699 [2024-12-14 06:48:55.447223] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.699 [2024-12-14 06:48:55.447236] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
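The network the target was just started into reduces to a handful of commands; this is a condensed replay of the ip/iptables calls traced above (same interface names and 10.0.0.x addresses), not an independent setup script, and the nvmf_tgt_if2/nvmf_tgt_br2 pair for 10.0.0.3 is built the same way.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # initiator -> target leg, as verified above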
00:18:41.699 [2024-12-14 06:48:55.447275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.266 06:48:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:42.266 06:48:56 -- common/autotest_common.sh@862 -- # return 0 00:18:42.266 06:48:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:42.266 06:48:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:42.266 06:48:56 -- common/autotest_common.sh@10 -- # set +x 00:18:42.266 06:48:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.266 06:48:56 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:42.266 06:48:56 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:42.266 06:48:56 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:42.266 06:48:56 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:42.266 06:48:56 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:42.266 06:48:56 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:42.266 06:48:56 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:42.266 06:48:56 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:42.525 [2024-12-14 06:48:56.466840] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.525 [2024-12-14 06:48:56.482787] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:42.525 [2024-12-14 06:48:56.483003] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.783 malloc0 00:18:42.783 06:48:56 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:42.783 06:48:56 -- fips/fips.sh@147 -- # bdevperf_pid=79589 00:18:42.783 06:48:56 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:42.783 06:48:56 -- fips/fips.sh@148 -- # waitforlisten 79589 /var/tmp/bdevperf.sock 00:18:42.783 06:48:56 -- common/autotest_common.sh@829 -- # '[' -z 79589 ']' 00:18:42.783 06:48:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:42.783 06:48:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:42.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:42.783 06:48:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:42.783 06:48:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:42.783 06:48:56 -- common/autotest_common.sh@10 -- # set +x 00:18:42.783 [2024-12-14 06:48:56.628092] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:42.783 [2024-12-14 06:48:56.628195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79589 ] 00:18:42.783 [2024-12-14 06:48:56.770023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.041 [2024-12-14 06:48:56.892345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.609 06:48:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:43.609 06:48:57 -- common/autotest_common.sh@862 -- # return 0 00:18:43.609 06:48:57 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:43.867 [2024-12-14 06:48:57.825613] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:44.125 TLSTESTn1 00:18:44.125 06:48:57 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:44.125 Running I/O for 10 seconds... 00:18:54.098 00:18:54.098 Latency(us) 00:18:54.098 [2024-12-14T06:49:08.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.098 [2024-12-14T06:49:08.090Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:54.098 Verification LBA range: start 0x0 length 0x2000 00:18:54.098 TLSTESTn1 : 10.01 6709.33 26.21 0.00 0.00 19048.42 4200.26 20852.36 00:18:54.098 [2024-12-14T06:49:08.090Z] =================================================================================================================== 00:18:54.098 [2024-12-14T06:49:08.090Z] Total : 6709.33 26.21 0.00 0.00 19048.42 4200.26 20852.36 00:18:54.098 0 00:18:54.098 06:49:08 -- fips/fips.sh@1 -- # cleanup 00:18:54.098 06:49:08 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:54.098 06:49:08 -- common/autotest_common.sh@806 -- # type=--id 00:18:54.098 06:49:08 -- common/autotest_common.sh@807 -- # id=0 00:18:54.098 06:49:08 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:54.098 06:49:08 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:54.357 06:49:08 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:54.357 06:49:08 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:54.357 06:49:08 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:54.357 06:49:08 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:54.357 nvmf_trace.0 00:18:54.357 06:49:08 -- common/autotest_common.sh@821 -- # return 0 00:18:54.357 06:49:08 -- fips/fips.sh@16 -- # killprocess 79589 00:18:54.357 06:49:08 -- common/autotest_common.sh@936 -- # '[' -z 79589 ']' 00:18:54.357 06:49:08 -- common/autotest_common.sh@940 -- # kill -0 79589 00:18:54.357 06:49:08 -- common/autotest_common.sh@941 -- # uname 00:18:54.357 06:49:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:54.357 06:49:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79589 00:18:54.357 06:49:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:54.357 06:49:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:54.357 
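Pulled together, the TLS pieces this fips run exercised: the retained PSK is written to key.txt with 0600 permissions and handed to the bdevperf initiator via --psk. The standalone lines below restate what fips.sh@137-139 and the rpc.py call above did; paths are relative to the spdk repo.

echo -n "NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:" > test/nvmf/fips/key.txt
chmod 0600 test/nvmf/fips/key.txt
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk test/nvmf/fips/key.txt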
06:49:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79589' 00:18:54.357 killing process with pid 79589 00:18:54.357 06:49:08 -- common/autotest_common.sh@955 -- # kill 79589 00:18:54.357 Received shutdown signal, test time was about 10.000000 seconds 00:18:54.357 00:18:54.357 Latency(us) 00:18:54.357 [2024-12-14T06:49:08.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.357 [2024-12-14T06:49:08.349Z] =================================================================================================================== 00:18:54.357 [2024-12-14T06:49:08.349Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:54.357 06:49:08 -- common/autotest_common.sh@960 -- # wait 79589 00:18:54.616 06:49:08 -- fips/fips.sh@17 -- # nvmftestfini 00:18:54.616 06:49:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:54.616 06:49:08 -- nvmf/common.sh@116 -- # sync 00:18:54.616 06:49:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:54.616 06:49:08 -- nvmf/common.sh@119 -- # set +e 00:18:54.616 06:49:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:54.616 06:49:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:54.616 rmmod nvme_tcp 00:18:54.616 rmmod nvme_fabrics 00:18:54.616 rmmod nvme_keyring 00:18:54.875 06:49:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:54.875 06:49:08 -- nvmf/common.sh@123 -- # set -e 00:18:54.875 06:49:08 -- nvmf/common.sh@124 -- # return 0 00:18:54.875 06:49:08 -- nvmf/common.sh@477 -- # '[' -n 79531 ']' 00:18:54.875 06:49:08 -- nvmf/common.sh@478 -- # killprocess 79531 00:18:54.875 06:49:08 -- common/autotest_common.sh@936 -- # '[' -z 79531 ']' 00:18:54.875 06:49:08 -- common/autotest_common.sh@940 -- # kill -0 79531 00:18:54.875 06:49:08 -- common/autotest_common.sh@941 -- # uname 00:18:54.875 06:49:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:54.875 06:49:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79531 00:18:54.875 06:49:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:54.875 killing process with pid 79531 00:18:54.875 06:49:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:54.875 06:49:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79531' 00:18:54.875 06:49:08 -- common/autotest_common.sh@955 -- # kill 79531 00:18:54.875 06:49:08 -- common/autotest_common.sh@960 -- # wait 79531 00:18:55.134 06:49:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:55.134 06:49:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:55.134 06:49:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:55.134 06:49:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:55.134 06:49:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:55.134 06:49:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.134 06:49:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.134 06:49:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.134 06:49:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:55.134 06:49:09 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:55.134 00:18:55.134 real 0m14.645s 00:18:55.134 user 0m19.597s 00:18:55.134 sys 0m6.028s 00:18:55.134 06:49:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:55.134 06:49:09 -- common/autotest_common.sh@10 -- # set +x 00:18:55.134 ************************************ 00:18:55.134 END TEST nvmf_fips 
00:18:55.134 ************************************ 00:18:55.134 06:49:09 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:18:55.134 06:49:09 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:55.134 06:49:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:55.134 06:49:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:55.134 06:49:09 -- common/autotest_common.sh@10 -- # set +x 00:18:55.134 ************************************ 00:18:55.134 START TEST nvmf_fuzz 00:18:55.134 ************************************ 00:18:55.134 06:49:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:55.393 * Looking for test storage... 00:18:55.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:55.393 06:49:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:55.393 06:49:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:55.393 06:49:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:55.393 06:49:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:55.393 06:49:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:55.393 06:49:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:55.393 06:49:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:55.393 06:49:09 -- scripts/common.sh@335 -- # IFS=.-: 00:18:55.393 06:49:09 -- scripts/common.sh@335 -- # read -ra ver1 00:18:55.393 06:49:09 -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.393 06:49:09 -- scripts/common.sh@336 -- # read -ra ver2 00:18:55.393 06:49:09 -- scripts/common.sh@337 -- # local 'op=<' 00:18:55.393 06:49:09 -- scripts/common.sh@339 -- # ver1_l=2 00:18:55.393 06:49:09 -- scripts/common.sh@340 -- # ver2_l=1 00:18:55.393 06:49:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:55.393 06:49:09 -- scripts/common.sh@343 -- # case "$op" in 00:18:55.393 06:49:09 -- scripts/common.sh@344 -- # : 1 00:18:55.393 06:49:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:55.393 06:49:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:55.393 06:49:09 -- scripts/common.sh@364 -- # decimal 1 00:18:55.393 06:49:09 -- scripts/common.sh@352 -- # local d=1 00:18:55.393 06:49:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.393 06:49:09 -- scripts/common.sh@354 -- # echo 1 00:18:55.393 06:49:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:55.393 06:49:09 -- scripts/common.sh@365 -- # decimal 2 00:18:55.393 06:49:09 -- scripts/common.sh@352 -- # local d=2 00:18:55.393 06:49:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.393 06:49:09 -- scripts/common.sh@354 -- # echo 2 00:18:55.393 06:49:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:55.393 06:49:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:55.394 06:49:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:55.394 06:49:09 -- scripts/common.sh@367 -- # return 0 00:18:55.394 06:49:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.394 06:49:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:55.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.394 --rc genhtml_branch_coverage=1 00:18:55.394 --rc genhtml_function_coverage=1 00:18:55.394 --rc genhtml_legend=1 00:18:55.394 --rc geninfo_all_blocks=1 00:18:55.394 --rc geninfo_unexecuted_blocks=1 00:18:55.394 00:18:55.394 ' 00:18:55.394 06:49:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:55.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.394 --rc genhtml_branch_coverage=1 00:18:55.394 --rc genhtml_function_coverage=1 00:18:55.394 --rc genhtml_legend=1 00:18:55.394 --rc geninfo_all_blocks=1 00:18:55.394 --rc geninfo_unexecuted_blocks=1 00:18:55.394 00:18:55.394 ' 00:18:55.394 06:49:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:55.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.394 --rc genhtml_branch_coverage=1 00:18:55.394 --rc genhtml_function_coverage=1 00:18:55.394 --rc genhtml_legend=1 00:18:55.394 --rc geninfo_all_blocks=1 00:18:55.394 --rc geninfo_unexecuted_blocks=1 00:18:55.394 00:18:55.394 ' 00:18:55.394 06:49:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:55.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.394 --rc genhtml_branch_coverage=1 00:18:55.394 --rc genhtml_function_coverage=1 00:18:55.394 --rc genhtml_legend=1 00:18:55.394 --rc geninfo_all_blocks=1 00:18:55.394 --rc geninfo_unexecuted_blocks=1 00:18:55.394 00:18:55.394 ' 00:18:55.394 06:49:09 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:55.394 06:49:09 -- nvmf/common.sh@7 -- # uname -s 00:18:55.394 06:49:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.394 06:49:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.394 06:49:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.394 06:49:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.394 06:49:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.394 06:49:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.394 06:49:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.394 06:49:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.394 06:49:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.394 06:49:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.394 06:49:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 
00:18:55.394 06:49:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:18:55.394 06:49:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.394 06:49:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.394 06:49:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:55.394 06:49:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:55.394 06:49:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.394 06:49:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.394 06:49:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.394 06:49:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.394 06:49:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.394 06:49:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.394 06:49:09 -- paths/export.sh@5 -- # export PATH 00:18:55.394 06:49:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.394 06:49:09 -- nvmf/common.sh@46 -- # : 0 00:18:55.394 06:49:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:55.394 06:49:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:55.394 06:49:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:55.394 06:49:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.394 06:49:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.394 06:49:09 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:18:55.394 06:49:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:55.394 06:49:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:55.394 06:49:09 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:55.394 06:49:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:55.394 06:49:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.394 06:49:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:55.394 06:49:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:55.394 06:49:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:55.394 06:49:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.394 06:49:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.394 06:49:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.394 06:49:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:55.394 06:49:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:55.394 06:49:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:55.394 06:49:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:55.394 06:49:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:55.394 06:49:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:55.394 06:49:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.394 06:49:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:55.394 06:49:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:55.394 06:49:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:55.394 06:49:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:55.394 06:49:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:55.394 06:49:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:55.394 06:49:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.394 06:49:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:55.394 06:49:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:55.394 06:49:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:55.394 06:49:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:55.394 06:49:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:55.394 06:49:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:55.394 Cannot find device "nvmf_tgt_br" 00:18:55.394 06:49:09 -- nvmf/common.sh@154 -- # true 00:18:55.394 06:49:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:55.394 Cannot find device "nvmf_tgt_br2" 00:18:55.394 06:49:09 -- nvmf/common.sh@155 -- # true 00:18:55.394 06:49:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:55.394 06:49:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:55.394 Cannot find device "nvmf_tgt_br" 00:18:55.394 06:49:09 -- nvmf/common.sh@157 -- # true 00:18:55.394 06:49:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:55.394 Cannot find device "nvmf_tgt_br2" 00:18:55.394 06:49:09 -- nvmf/common.sh@158 -- # true 00:18:55.394 06:49:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:55.653 06:49:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:55.653 06:49:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:55.653 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:55.653 06:49:09 -- nvmf/common.sh@161 -- # true 00:18:55.653 06:49:09 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:55.654 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:55.654 06:49:09 -- nvmf/common.sh@162 -- # true 00:18:55.654 06:49:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:55.654 06:49:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:55.654 06:49:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:55.654 06:49:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:55.654 06:49:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:55.654 06:49:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:55.654 06:49:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:55.654 06:49:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:55.654 06:49:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:55.654 06:49:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:55.654 06:49:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:55.654 06:49:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:55.654 06:49:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:55.654 06:49:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:55.654 06:49:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:55.654 06:49:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:55.654 06:49:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:55.654 06:49:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:55.654 06:49:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:55.654 06:49:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:55.654 06:49:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:55.654 06:49:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:55.654 06:49:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:55.654 06:49:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:55.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:55.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:18:55.654 00:18:55.654 --- 10.0.0.2 ping statistics --- 00:18:55.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.654 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:18:55.654 06:49:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:55.654 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:55.654 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:18:55.654 00:18:55.654 --- 10.0.0.3 ping statistics --- 00:18:55.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.654 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:18:55.654 06:49:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:55.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:55.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:18:55.654 00:18:55.654 --- 10.0.0.1 ping statistics --- 00:18:55.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.654 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:55.654 06:49:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.654 06:49:09 -- nvmf/common.sh@421 -- # return 0 00:18:55.654 06:49:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:55.654 06:49:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.654 06:49:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:55.654 06:49:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:55.654 06:49:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:55.654 06:49:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:55.654 06:49:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:55.654 06:49:09 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=79932 00:18:55.654 06:49:09 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:55.654 06:49:09 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:55.654 06:49:09 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 79932 00:18:55.654 06:49:09 -- common/autotest_common.sh@829 -- # '[' -z 79932 ']' 00:18:55.654 06:49:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.654 06:49:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.654 06:49:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
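As a rough illustration of what waitforlisten is doing here (this is not the actual autotest_common.sh code), waiting for the freshly started nvmf_tgt boils down to polling its RPC socket while checking that the pid is still alive; rpc_get_methods is used as a stand-in liveness call.

pid=$nvmfpid rpc_addr=/var/tmp/spdk.sock max_retries=100
for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || { echo "process $pid exited early"; break; }
    # any RPC the app answers once its UNIX socket is up ends the wait
    scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
    sleep 0.5
done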
00:18:55.654 06:49:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.654 06:49:09 -- common/autotest_common.sh@10 -- # set +x 00:18:57.030 06:49:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:57.030 06:49:10 -- common/autotest_common.sh@862 -- # return 0 00:18:57.030 06:49:10 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:57.030 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.030 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:18:57.030 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.030 06:49:10 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:57.030 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.030 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:18:57.030 Malloc0 00:18:57.030 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.030 06:49:10 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:57.030 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.030 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:18:57.030 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.030 06:49:10 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:57.030 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.030 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:18:57.030 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.030 06:49:10 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:57.030 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.030 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:18:57.030 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.030 06:49:10 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:57.030 06:49:10 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:57.288 Shutting down the fuzz application 00:18:57.288 06:49:11 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:57.546 Shutting down the fuzz application 00:18:57.546 06:49:11 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:57.546 06:49:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.546 06:49:11 -- common/autotest_common.sh@10 -- # set +x 00:18:57.546 06:49:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.546 06:49:11 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:57.546 06:49:11 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:57.546 06:49:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:57.546 06:49:11 -- nvmf/common.sh@116 -- # sync 00:18:57.804 06:49:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:57.804 06:49:11 -- nvmf/common.sh@119 -- # set +e 00:18:57.804 06:49:11 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:18:57.804 06:49:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:57.804 rmmod nvme_tcp 00:18:57.804 rmmod nvme_fabrics 00:18:57.804 rmmod nvme_keyring 00:18:57.804 06:49:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:57.804 06:49:11 -- nvmf/common.sh@123 -- # set -e 00:18:57.804 06:49:11 -- nvmf/common.sh@124 -- # return 0 00:18:57.804 06:49:11 -- nvmf/common.sh@477 -- # '[' -n 79932 ']' 00:18:57.804 06:49:11 -- nvmf/common.sh@478 -- # killprocess 79932 00:18:57.804 06:49:11 -- common/autotest_common.sh@936 -- # '[' -z 79932 ']' 00:18:57.804 06:49:11 -- common/autotest_common.sh@940 -- # kill -0 79932 00:18:57.804 06:49:11 -- common/autotest_common.sh@941 -- # uname 00:18:57.804 06:49:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:57.804 06:49:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79932 00:18:57.804 06:49:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:57.804 06:49:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:57.805 killing process with pid 79932 00:18:57.805 06:49:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79932' 00:18:57.805 06:49:11 -- common/autotest_common.sh@955 -- # kill 79932 00:18:57.805 06:49:11 -- common/autotest_common.sh@960 -- # wait 79932 00:18:58.063 06:49:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:58.063 06:49:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:58.063 06:49:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:58.063 06:49:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:58.063 06:49:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:58.063 06:49:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.063 06:49:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.063 06:49:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.063 06:49:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:58.063 06:49:12 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:58.063 ************************************ 00:18:58.063 END TEST nvmf_fuzz 00:18:58.063 ************************************ 00:18:58.063 00:18:58.063 real 0m2.975s 00:18:58.063 user 0m3.077s 00:18:58.063 sys 0m0.755s 00:18:58.063 06:49:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:58.063 06:49:12 -- common/autotest_common.sh@10 -- # set +x 00:18:58.323 06:49:12 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:58.323 06:49:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:58.323 06:49:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:58.323 06:49:12 -- common/autotest_common.sh@10 -- # set +x 00:18:58.323 ************************************ 00:18:58.323 START TEST nvmf_multiconnection 00:18:58.323 ************************************ 00:18:58.323 06:49:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:58.323 * Looking for test storage... 
00:18:58.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:58.323 06:49:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:58.323 06:49:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:58.323 06:49:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:58.323 06:49:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:58.323 06:49:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:58.323 06:49:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:58.323 06:49:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:58.323 06:49:12 -- scripts/common.sh@335 -- # IFS=.-: 00:18:58.323 06:49:12 -- scripts/common.sh@335 -- # read -ra ver1 00:18:58.323 06:49:12 -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.323 06:49:12 -- scripts/common.sh@336 -- # read -ra ver2 00:18:58.323 06:49:12 -- scripts/common.sh@337 -- # local 'op=<' 00:18:58.323 06:49:12 -- scripts/common.sh@339 -- # ver1_l=2 00:18:58.323 06:49:12 -- scripts/common.sh@340 -- # ver2_l=1 00:18:58.323 06:49:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:58.323 06:49:12 -- scripts/common.sh@343 -- # case "$op" in 00:18:58.323 06:49:12 -- scripts/common.sh@344 -- # : 1 00:18:58.323 06:49:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:58.323 06:49:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:58.323 06:49:12 -- scripts/common.sh@364 -- # decimal 1 00:18:58.323 06:49:12 -- scripts/common.sh@352 -- # local d=1 00:18:58.323 06:49:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.323 06:49:12 -- scripts/common.sh@354 -- # echo 1 00:18:58.323 06:49:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:58.323 06:49:12 -- scripts/common.sh@365 -- # decimal 2 00:18:58.323 06:49:12 -- scripts/common.sh@352 -- # local d=2 00:18:58.323 06:49:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:58.323 06:49:12 -- scripts/common.sh@354 -- # echo 2 00:18:58.323 06:49:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:58.323 06:49:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:58.323 06:49:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:58.323 06:49:12 -- scripts/common.sh@367 -- # return 0 00:18:58.323 06:49:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:58.323 06:49:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:58.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.323 --rc genhtml_branch_coverage=1 00:18:58.323 --rc genhtml_function_coverage=1 00:18:58.323 --rc genhtml_legend=1 00:18:58.323 --rc geninfo_all_blocks=1 00:18:58.323 --rc geninfo_unexecuted_blocks=1 00:18:58.323 00:18:58.323 ' 00:18:58.323 06:49:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:58.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.323 --rc genhtml_branch_coverage=1 00:18:58.323 --rc genhtml_function_coverage=1 00:18:58.323 --rc genhtml_legend=1 00:18:58.323 --rc geninfo_all_blocks=1 00:18:58.323 --rc geninfo_unexecuted_blocks=1 00:18:58.323 00:18:58.323 ' 00:18:58.323 06:49:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:58.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.323 --rc genhtml_branch_coverage=1 00:18:58.323 --rc genhtml_function_coverage=1 00:18:58.323 --rc genhtml_legend=1 00:18:58.323 --rc geninfo_all_blocks=1 00:18:58.323 --rc geninfo_unexecuted_blocks=1 00:18:58.323 00:18:58.323 ' 00:18:58.323 
06:49:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:58.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.323 --rc genhtml_branch_coverage=1 00:18:58.323 --rc genhtml_function_coverage=1 00:18:58.323 --rc genhtml_legend=1 00:18:58.323 --rc geninfo_all_blocks=1 00:18:58.323 --rc geninfo_unexecuted_blocks=1 00:18:58.323 00:18:58.323 ' 00:18:58.323 06:49:12 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:58.323 06:49:12 -- nvmf/common.sh@7 -- # uname -s 00:18:58.323 06:49:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:58.323 06:49:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.323 06:49:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:58.323 06:49:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.323 06:49:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.323 06:49:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.323 06:49:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.323 06:49:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.323 06:49:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.323 06:49:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.323 06:49:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:18:58.323 06:49:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:18:58.323 06:49:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.323 06:49:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.323 06:49:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:58.323 06:49:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:58.323 06:49:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.323 06:49:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.323 06:49:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.323 06:49:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.323 06:49:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.324 06:49:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.324 06:49:12 -- paths/export.sh@5 -- # export PATH 00:18:58.324 06:49:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.324 06:49:12 -- nvmf/common.sh@46 -- # : 0 00:18:58.324 06:49:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:58.324 06:49:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:58.324 06:49:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:58.324 06:49:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:58.324 06:49:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.324 06:49:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:58.324 06:49:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:58.324 06:49:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:58.324 06:49:12 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:58.324 06:49:12 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:58.324 06:49:12 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:58.324 06:49:12 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:58.324 06:49:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:58.324 06:49:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.324 06:49:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:58.324 06:49:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:58.324 06:49:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:58.324 06:49:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.324 06:49:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.324 06:49:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.324 06:49:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:58.324 06:49:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:58.324 06:49:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:58.324 06:49:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:58.324 06:49:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:58.324 06:49:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:58.324 06:49:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.324 06:49:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.324 06:49:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:58.324 06:49:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:58.324 06:49:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:58.324 06:49:12 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:58.324 06:49:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:58.324 06:49:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.324 06:49:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:58.324 06:49:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:58.324 06:49:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:58.324 06:49:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:58.324 06:49:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:58.583 06:49:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:58.583 Cannot find device "nvmf_tgt_br" 00:18:58.583 06:49:12 -- nvmf/common.sh@154 -- # true 00:18:58.583 06:49:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:58.583 Cannot find device "nvmf_tgt_br2" 00:18:58.583 06:49:12 -- nvmf/common.sh@155 -- # true 00:18:58.583 06:49:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:58.583 06:49:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:58.583 Cannot find device "nvmf_tgt_br" 00:18:58.583 06:49:12 -- nvmf/common.sh@157 -- # true 00:18:58.583 06:49:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:58.583 Cannot find device "nvmf_tgt_br2" 00:18:58.583 06:49:12 -- nvmf/common.sh@158 -- # true 00:18:58.583 06:49:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:58.583 06:49:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:58.583 06:49:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:58.583 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:58.583 06:49:12 -- nvmf/common.sh@161 -- # true 00:18:58.583 06:49:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:58.583 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:58.583 06:49:12 -- nvmf/common.sh@162 -- # true 00:18:58.583 06:49:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:58.583 06:49:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:58.583 06:49:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:58.583 06:49:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:58.583 06:49:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:58.583 06:49:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:58.583 06:49:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:58.583 06:49:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:58.583 06:49:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:58.583 06:49:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:58.583 06:49:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:58.583 06:49:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:58.583 06:49:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:58.583 06:49:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:58.583 06:49:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:18:58.583 06:49:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:58.583 06:49:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:58.583 06:49:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:58.583 06:49:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:58.583 06:49:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:58.583 06:49:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:58.842 06:49:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:58.842 06:49:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:58.842 06:49:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:58.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:18:58.842 00:18:58.842 --- 10.0.0.2 ping statistics --- 00:18:58.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.842 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:18:58.842 06:49:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:58.842 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:58.842 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:18:58.842 00:18:58.842 --- 10.0.0.3 ping statistics --- 00:18:58.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.842 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:58.842 06:49:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:58.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:58.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:18:58.842 00:18:58.842 --- 10.0.0.1 ping statistics --- 00:18:58.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.842 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:58.842 06:49:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.842 06:49:12 -- nvmf/common.sh@421 -- # return 0 00:18:58.842 06:49:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:58.842 06:49:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.842 06:49:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:58.842 06:49:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:58.842 06:49:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.842 06:49:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:58.842 06:49:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:58.842 06:49:12 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:58.842 06:49:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:58.842 06:49:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:58.842 06:49:12 -- common/autotest_common.sh@10 -- # set +x 00:18:58.842 06:49:12 -- nvmf/common.sh@469 -- # nvmfpid=80156 00:18:58.842 06:49:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:58.842 06:49:12 -- nvmf/common.sh@470 -- # waitforlisten 80156 00:18:58.842 06:49:12 -- common/autotest_common.sh@829 -- # '[' -z 80156 ']' 00:18:58.842 06:49:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.842 06:49:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:58.842 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:18:58.842 06:49:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.842 06:49:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:58.842 06:49:12 -- common/autotest_common.sh@10 -- # set +x 00:18:58.842 [2024-12-14 06:49:12.696290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:58.842 [2024-12-14 06:49:12.696373] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.842 [2024-12-14 06:49:12.831737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:59.101 [2024-12-14 06:49:12.919527] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:59.101 [2024-12-14 06:49:12.919718] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.101 [2024-12-14 06:49:12.919731] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.101 [2024-12-14 06:49:12.919739] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:59.101 [2024-12-14 06:49:12.920285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.101 [2024-12-14 06:49:12.920392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.101 [2024-12-14 06:49:12.920939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:59.101 [2024-12-14 06:49:12.920986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.037 06:49:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:00.037 06:49:13 -- common/autotest_common.sh@862 -- # return 0 00:19:00.037 06:49:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:00.037 06:49:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:00.037 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:19:00.037 06:49:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.037 06:49:13 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:00.037 06:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.037 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:19:00.037 [2024-12-14 06:49:13.747286] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.037 06:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.037 06:49:13 -- target/multiconnection.sh@21 -- # seq 1 11 00:19:00.037 06:49:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:00.037 06:49:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:00.037 06:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.037 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:19:00.037 Malloc1 00:19:00.037 06:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.037 06:49:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:19:00.037 06:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.037 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:19:00.037 06:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.037 
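[editor's note] The nvmf_veth_init sequence traced just above builds the test topology: the initiator address stays on the host side, the two target addresses live inside the nvmf_tgt_ns_spdk namespace, and all three veth peers are enslaved to a host bridge before the firewall rules and ping checks. A condensed sketch of that setup, using the interface names and addresses exactly as they appear in the log (the teardown of any stale links and the failure-tolerant "Cannot find device" steps are omitted):

    # Condensed from the nvmf_veth_init trace above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator (host)
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge                 # bridge the three host-side peers together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                              # sanity checks, as in the trace

The pings to 10.0.0.2, 10.0.0.3 and back to 10.0.0.1 in the log confirm host-to-namespace reachability before the multiconnection target is started with -m 0xF (four reactors).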
06:49:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:00.037 06:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.037 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:19:00.037 06:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.037 06:49:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:00.037 06:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.037 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:19:00.037 [2024-12-14 06:49:13.841242] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.037 06:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.037 06:49:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:00.037 06:49:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:19:00.037 06:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.037 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:19:00.037 Malloc2 00:19:00.037 06:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.037 06:49:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:00.037 06:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.037 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:19:00.037 06:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.037 06:49:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:19:00.037 06:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.037 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:19:00.037 06:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.037 06:49:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:00.037 06:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.037 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:19:00.037 06:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.037 06:49:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:00.037 06:49:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:19:00.037 06:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.037 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:19:00.037 Malloc3 00:19:00.037 06:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.037 06:49:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:19:00.037 06:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.037 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:19:00.037 06:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.037 06:49:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:19:00.037 06:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.037 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:19:00.037 06:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.037 06:49:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 
00:19:00.037 06:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.037 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:19:00.037 06:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.037 06:49:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:00.037 06:49:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:19:00.037 06:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.037 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:19:00.037 Malloc4 00:19:00.037 06:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.037 06:49:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:19:00.037 06:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.037 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:19:00.037 06:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.037 06:49:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:19:00.037 06:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.037 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:19:00.037 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.037 06:49:14 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:19:00.037 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.037 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.037 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.037 06:49:14 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:00.037 06:49:14 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:19:00.037 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.037 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.297 Malloc5 00:19:00.297 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.297 06:49:14 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:19:00.297 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.297 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.297 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.297 06:49:14 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:19:00.297 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.297 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.297 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.297 06:49:14 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:19:00.297 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.297 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.297 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.297 06:49:14 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:00.297 06:49:14 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:19:00.297 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.297 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.297 Malloc6 00:19:00.297 06:49:14 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.297 06:49:14 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:19:00.297 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.297 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.297 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.297 06:49:14 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:19:00.297 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.297 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.297 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.297 06:49:14 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:19:00.297 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.297 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.297 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.297 06:49:14 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:00.297 06:49:14 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:19:00.297 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.297 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.297 Malloc7 00:19:00.297 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.297 06:49:14 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:19:00.297 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.297 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.297 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.297 06:49:14 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:19:00.297 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.297 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.297 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.297 06:49:14 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:19:00.297 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.297 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.297 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.297 06:49:14 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:00.297 06:49:14 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:19:00.297 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.297 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.297 Malloc8 00:19:00.297 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.297 06:49:14 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:19:00.297 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.297 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.297 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.297 06:49:14 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:19:00.297 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.297 06:49:14 
-- common/autotest_common.sh@10 -- # set +x 00:19:00.297 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.297 06:49:14 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:19:00.297 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.297 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.297 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.297 06:49:14 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:00.297 06:49:14 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:19:00.297 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.297 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.297 Malloc9 00:19:00.297 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.297 06:49:14 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:19:00.297 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.297 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.556 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.556 06:49:14 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:19:00.556 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.556 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.556 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.556 06:49:14 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:19:00.556 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.556 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.556 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.556 06:49:14 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:00.556 06:49:14 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:19:00.556 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.556 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.556 Malloc10 00:19:00.556 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.556 06:49:14 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:19:00.556 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.556 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.556 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.556 06:49:14 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:19:00.556 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.556 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.556 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.556 06:49:14 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:19:00.556 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.556 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.556 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.556 06:49:14 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:00.556 06:49:14 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:19:00.556 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.556 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.556 Malloc11 00:19:00.556 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.556 06:49:14 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:19:00.557 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.557 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.557 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.557 06:49:14 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:19:00.557 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.557 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.557 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.557 06:49:14 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:19:00.557 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.557 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.557 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.557 06:49:14 -- target/multiconnection.sh@28 -- # seq 1 11 00:19:00.557 06:49:14 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:00.557 06:49:14 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:00.816 06:49:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:19:00.816 06:49:14 -- common/autotest_common.sh@1187 -- # local i=0 00:19:00.816 06:49:14 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:00.816 06:49:14 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:00.816 06:49:14 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:02.717 06:49:16 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:02.717 06:49:16 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:02.717 06:49:16 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:19:02.717 06:49:16 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:02.717 06:49:16 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:02.717 06:49:16 -- common/autotest_common.sh@1197 -- # return 0 00:19:02.717 06:49:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:02.717 06:49:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:19:02.976 06:49:16 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:19:02.976 06:49:16 -- common/autotest_common.sh@1187 -- # local i=0 00:19:02.976 06:49:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.976 06:49:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:02.976 06:49:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:04.879 06:49:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:04.879 06:49:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:19:04.879 06:49:18 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:19:04.879 06:49:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:04.879 06:49:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.879 06:49:18 -- common/autotest_common.sh@1197 -- # return 0 00:19:04.879 06:49:18 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.879 06:49:18 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:19:05.138 06:49:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:19:05.138 06:49:18 -- common/autotest_common.sh@1187 -- # local i=0 00:19:05.138 06:49:18 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:05.138 06:49:18 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:05.138 06:49:18 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:07.066 06:49:20 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:07.066 06:49:21 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:07.066 06:49:21 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:19:07.066 06:49:21 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:07.066 06:49:21 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:07.066 06:49:21 -- common/autotest_common.sh@1197 -- # return 0 00:19:07.066 06:49:21 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.066 06:49:21 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:19:07.325 06:49:21 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:19:07.325 06:49:21 -- common/autotest_common.sh@1187 -- # local i=0 00:19:07.325 06:49:21 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:07.325 06:49:21 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:07.325 06:49:21 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:09.227 06:49:23 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:09.227 06:49:23 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:09.227 06:49:23 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:19:09.227 06:49:23 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:09.227 06:49:23 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:09.227 06:49:23 -- common/autotest_common.sh@1197 -- # return 0 00:19:09.227 06:49:23 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:09.227 06:49:23 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:19:09.486 06:49:23 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:19:09.486 06:49:23 -- common/autotest_common.sh@1187 -- # local i=0 00:19:09.486 06:49:23 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:09.486 06:49:23 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:09.486 06:49:23 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:12.017 06:49:25 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:12.017 06:49:25 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:12.017 06:49:25 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:19:12.017 06:49:25 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:12.017 06:49:25 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:12.017 06:49:25 -- common/autotest_common.sh@1197 -- # return 0 00:19:12.017 06:49:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:12.017 06:49:25 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:19:12.017 06:49:25 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:19:12.017 06:49:25 -- common/autotest_common.sh@1187 -- # local i=0 00:19:12.017 06:49:25 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:12.017 06:49:25 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:12.017 06:49:25 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:13.921 06:49:27 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:13.921 06:49:27 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:13.921 06:49:27 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:19:13.921 06:49:27 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:13.921 06:49:27 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:13.921 06:49:27 -- common/autotest_common.sh@1197 -- # return 0 00:19:13.921 06:49:27 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.921 06:49:27 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:19:13.921 06:49:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:19:13.921 06:49:27 -- common/autotest_common.sh@1187 -- # local i=0 00:19:13.921 06:49:27 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:13.921 06:49:27 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:13.921 06:49:27 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:15.825 06:49:29 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:15.825 06:49:29 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:15.825 06:49:29 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:19:15.825 06:49:29 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:15.826 06:49:29 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:15.826 06:49:29 -- common/autotest_common.sh@1197 -- # return 0 00:19:15.826 06:49:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:15.826 06:49:29 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:19:16.085 06:49:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:19:16.085 06:49:29 -- common/autotest_common.sh@1187 -- # local i=0 00:19:16.085 06:49:29 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:16.085 06:49:29 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:16.085 06:49:29 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:18.619 06:49:31 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:18.619 06:49:32 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:18.619 06:49:32 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:19:18.619 06:49:32 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:18.619 06:49:32 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:18.619 06:49:32 -- common/autotest_common.sh@1197 -- # return 0 00:19:18.619 06:49:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:18.619 06:49:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:19:18.619 06:49:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:19:18.619 06:49:32 -- common/autotest_common.sh@1187 -- # local i=0 00:19:18.619 06:49:32 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:18.619 06:49:32 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:18.619 06:49:32 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:20.523 06:49:34 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:20.523 06:49:34 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:20.523 06:49:34 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:19:20.523 06:49:34 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:20.523 06:49:34 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:20.523 06:49:34 -- common/autotest_common.sh@1197 -- # return 0 00:19:20.523 06:49:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:20.523 06:49:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:19:20.523 06:49:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:19:20.523 06:49:34 -- common/autotest_common.sh@1187 -- # local i=0 00:19:20.523 06:49:34 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:20.523 06:49:34 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:20.523 06:49:34 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:23.096 06:49:36 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:23.096 06:49:36 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:23.096 06:49:36 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:19:23.096 06:49:36 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:23.096 06:49:36 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:23.096 06:49:36 -- common/autotest_common.sh@1197 -- # return 0 00:19:23.096 06:49:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:23.096 06:49:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:19:23.096 06:49:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:19:23.096 06:49:36 -- common/autotest_common.sh@1187 -- # local i=0 
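[editor's note] The multiconnection test traced above repeats one pattern eleven times: on the target side it backs subsystem nqn.2016-06.io.spdk:cnodeN (serial SPDKN) with a 64 MiB malloc bdev and a TCP listener on 10.0.0.2:4420, and on the host side it runs nvme connect and then waits until lsblk reports a block device carrying that serial. A condensed sketch for one subsystem follows; rpc_cmd in the trace is a shell wrapper, shown here via scripts/rpc.py directly, and the polling loop is a simplified stand-in for the waitforserial helper. NQN, host UUID, and serial values are copied from the log.

    # Target side (via RPC): one malloc bdev behind one subsystem, TCP listener on 10.0.0.2:4420.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side: connect, then poll for a device with the expected serial (waitforserial pattern).
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 \
                 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 \
                 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    for i in $(seq 1 15); do
        if [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK1)" -ge 1 ]; then
            break
        fi
        sleep 2
    done

Once all eleven namespaces are visible (/dev/nvme0n1 through /dev/nvme10n1 in the fio job list below), the fio-wrapper kicks off one libaio read job per device at 256 KiB blocks and queue depth 64 for 10 seconds.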
00:19:23.096 06:49:36 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:23.096 06:49:36 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:23.096 06:49:36 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:25.001 06:49:38 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:25.001 06:49:38 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:25.001 06:49:38 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:19:25.001 06:49:38 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:25.001 06:49:38 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:25.001 06:49:38 -- common/autotest_common.sh@1197 -- # return 0 00:19:25.001 06:49:38 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:19:25.001 [global] 00:19:25.001 thread=1 00:19:25.001 invalidate=1 00:19:25.001 rw=read 00:19:25.001 time_based=1 00:19:25.001 runtime=10 00:19:25.001 ioengine=libaio 00:19:25.001 direct=1 00:19:25.001 bs=262144 00:19:25.001 iodepth=64 00:19:25.001 norandommap=1 00:19:25.001 numjobs=1 00:19:25.001 00:19:25.001 [job0] 00:19:25.001 filename=/dev/nvme0n1 00:19:25.001 [job1] 00:19:25.001 filename=/dev/nvme10n1 00:19:25.001 [job2] 00:19:25.001 filename=/dev/nvme1n1 00:19:25.001 [job3] 00:19:25.001 filename=/dev/nvme2n1 00:19:25.001 [job4] 00:19:25.001 filename=/dev/nvme3n1 00:19:25.001 [job5] 00:19:25.001 filename=/dev/nvme4n1 00:19:25.001 [job6] 00:19:25.001 filename=/dev/nvme5n1 00:19:25.001 [job7] 00:19:25.001 filename=/dev/nvme6n1 00:19:25.001 [job8] 00:19:25.001 filename=/dev/nvme7n1 00:19:25.001 [job9] 00:19:25.001 filename=/dev/nvme8n1 00:19:25.001 [job10] 00:19:25.001 filename=/dev/nvme9n1 00:19:25.001 Could not set queue depth (nvme0n1) 00:19:25.001 Could not set queue depth (nvme10n1) 00:19:25.001 Could not set queue depth (nvme1n1) 00:19:25.001 Could not set queue depth (nvme2n1) 00:19:25.001 Could not set queue depth (nvme3n1) 00:19:25.001 Could not set queue depth (nvme4n1) 00:19:25.001 Could not set queue depth (nvme5n1) 00:19:25.001 Could not set queue depth (nvme6n1) 00:19:25.001 Could not set queue depth (nvme7n1) 00:19:25.001 Could not set queue depth (nvme8n1) 00:19:25.001 Could not set queue depth (nvme9n1) 00:19:25.260 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:25.260 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:25.260 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:25.260 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:25.260 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:25.260 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:25.260 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:25.260 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:25.260 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:25.260 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:19:25.260 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:25.260 fio-3.35 00:19:25.260 Starting 11 threads 00:19:37.470 00:19:37.470 job0: (groupid=0, jobs=1): err= 0: pid=80639: Sat Dec 14 06:49:49 2024 00:19:37.470 read: IOPS=313, BW=78.4MiB/s (82.2MB/s)(794MiB/10137msec) 00:19:37.470 slat (usec): min=18, max=137937, avg=3102.45, stdev=10918.37 00:19:37.470 clat (msec): min=24, max=342, avg=200.68, stdev=37.99 00:19:37.470 lat (msec): min=24, max=342, avg=203.79, stdev=39.87 00:19:37.470 clat percentiles (msec): 00:19:37.470 | 1.00th=[ 72], 5.00th=[ 110], 10.00th=[ 171], 20.00th=[ 190], 00:19:37.470 | 30.00th=[ 197], 40.00th=[ 201], 50.00th=[ 205], 60.00th=[ 209], 00:19:37.470 | 70.00th=[ 215], 80.00th=[ 224], 90.00th=[ 239], 95.00th=[ 247], 00:19:37.470 | 99.00th=[ 266], 99.50th=[ 279], 99.90th=[ 342], 99.95th=[ 342], 00:19:37.470 | 99.99th=[ 342] 00:19:37.470 bw ( KiB/s): min=64000, max=135168, per=4.65%, avg=79660.45, stdev=15557.32, samples=20 00:19:37.470 iops : min= 250, max= 528, avg=311.10, stdev=60.77, samples=20 00:19:37.470 lat (msec) : 50=0.94%, 100=2.27%, 250=92.48%, 500=4.31% 00:19:37.470 cpu : usr=0.14%, sys=1.32%, ctx=570, majf=0, minf=4097 00:19:37.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:19:37.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:37.470 issued rwts: total=3177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:37.470 job1: (groupid=0, jobs=1): err= 0: pid=80640: Sat Dec 14 06:49:49 2024 00:19:37.470 read: IOPS=614, BW=154MiB/s (161MB/s)(1546MiB/10068msec) 00:19:37.470 slat (usec): min=17, max=84457, avg=1600.55, stdev=5365.77 00:19:37.470 clat (msec): min=26, max=178, avg=102.45, stdev=18.96 00:19:37.470 lat (msec): min=26, max=201, avg=104.05, stdev=19.67 00:19:37.470 clat percentiles (msec): 00:19:37.470 | 1.00th=[ 49], 5.00th=[ 69], 10.00th=[ 75], 20.00th=[ 89], 00:19:37.470 | 30.00th=[ 95], 40.00th=[ 101], 50.00th=[ 106], 60.00th=[ 109], 00:19:37.470 | 70.00th=[ 113], 80.00th=[ 117], 90.00th=[ 125], 95.00th=[ 130], 00:19:37.470 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 180], 99.95th=[ 180], 00:19:37.470 | 99.99th=[ 180] 00:19:37.470 bw ( KiB/s): min=128000, max=217088, per=9.18%, avg=157226.74, stdev=23110.51, samples=19 00:19:37.470 iops : min= 500, max= 848, avg=614.11, stdev=90.24, samples=19 00:19:37.470 lat (msec) : 50=1.05%, 100=39.55%, 250=59.40% 00:19:37.470 cpu : usr=0.36%, sys=2.25%, ctx=1253, majf=0, minf=4097 00:19:37.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:37.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:37.470 issued rwts: total=6185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:37.470 job2: (groupid=0, jobs=1): err= 0: pid=80641: Sat Dec 14 06:49:49 2024 00:19:37.470 read: IOPS=311, BW=77.9MiB/s (81.6MB/s)(790MiB/10146msec) 00:19:37.470 slat (usec): min=20, max=93421, avg=3118.11, stdev=10305.44 00:19:37.470 clat (msec): min=16, max=361, avg=202.01, stdev=39.56 00:19:37.470 lat (msec): min=17, max=361, avg=205.12, stdev=41.23 00:19:37.470 clat percentiles (msec): 00:19:37.470 | 1.00th=[ 33], 5.00th=[ 115], 
10.00th=[ 176], 20.00th=[ 192], 00:19:37.470 | 30.00th=[ 197], 40.00th=[ 201], 50.00th=[ 205], 60.00th=[ 211], 00:19:37.470 | 70.00th=[ 218], 80.00th=[ 228], 90.00th=[ 241], 95.00th=[ 251], 00:19:37.470 | 99.00th=[ 268], 99.50th=[ 313], 99.90th=[ 330], 99.95th=[ 363], 00:19:37.470 | 99.99th=[ 363] 00:19:37.470 bw ( KiB/s): min=64000, max=141082, per=4.62%, avg=79126.90, stdev=16321.10, samples=20 00:19:37.470 iops : min= 250, max= 551, avg=308.85, stdev=63.79, samples=20 00:19:37.470 lat (msec) : 20=0.38%, 50=1.27%, 100=1.17%, 250=92.15%, 500=5.03% 00:19:37.470 cpu : usr=0.16%, sys=1.38%, ctx=596, majf=0, minf=4097 00:19:37.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:19:37.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:37.470 issued rwts: total=3160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:37.470 job3: (groupid=0, jobs=1): err= 0: pid=80642: Sat Dec 14 06:49:49 2024 00:19:37.470 read: IOPS=308, BW=77.2MiB/s (81.0MB/s)(783MiB/10142msec) 00:19:37.470 slat (usec): min=21, max=99691, avg=3193.36, stdev=10922.52 00:19:37.470 clat (msec): min=61, max=329, avg=203.63, stdev=35.95 00:19:37.470 lat (msec): min=61, max=343, avg=206.82, stdev=37.75 00:19:37.470 clat percentiles (msec): 00:19:37.470 | 1.00th=[ 89], 5.00th=[ 111], 10.00th=[ 180], 20.00th=[ 192], 00:19:37.470 | 30.00th=[ 199], 40.00th=[ 203], 50.00th=[ 207], 60.00th=[ 211], 00:19:37.470 | 70.00th=[ 218], 80.00th=[ 226], 90.00th=[ 239], 95.00th=[ 249], 00:19:37.470 | 99.00th=[ 300], 99.50th=[ 317], 99.90th=[ 321], 99.95th=[ 330], 00:19:37.470 | 99.99th=[ 330] 00:19:37.470 bw ( KiB/s): min=64512, max=118784, per=4.59%, avg=78533.40, stdev=12130.37, samples=20 00:19:37.470 iops : min= 252, max= 464, avg=306.70, stdev=47.42, samples=20 00:19:37.470 lat (msec) : 100=3.03%, 250=92.98%, 500=3.99% 00:19:37.470 cpu : usr=0.11%, sys=1.06%, ctx=980, majf=0, minf=4097 00:19:37.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:19:37.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:37.470 issued rwts: total=3132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:37.470 job4: (groupid=0, jobs=1): err= 0: pid=80643: Sat Dec 14 06:49:49 2024 00:19:37.470 read: IOPS=596, BW=149MiB/s (156MB/s)(1502MiB/10082msec) 00:19:37.470 slat (usec): min=16, max=58676, avg=1628.55, stdev=5523.70 00:19:37.470 clat (msec): min=32, max=178, avg=105.56, stdev=14.97 00:19:37.470 lat (msec): min=32, max=181, avg=107.19, stdev=15.72 00:19:37.470 clat percentiles (msec): 00:19:37.470 | 1.00th=[ 57], 5.00th=[ 85], 10.00th=[ 90], 20.00th=[ 94], 00:19:37.470 | 30.00th=[ 99], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 109], 00:19:37.470 | 70.00th=[ 112], 80.00th=[ 117], 90.00th=[ 125], 95.00th=[ 130], 00:19:37.470 | 99.00th=[ 142], 99.50th=[ 148], 99.90th=[ 180], 99.95th=[ 180], 00:19:37.470 | 99.99th=[ 180] 00:19:37.470 bw ( KiB/s): min=133876, max=170496, per=8.89%, avg=152136.85, stdev=12058.01, samples=20 00:19:37.470 iops : min= 522, max= 666, avg=594.10, stdev=47.34, samples=20 00:19:37.470 lat (msec) : 50=0.53%, 100=34.38%, 250=65.09% 00:19:37.470 cpu : usr=0.32%, sys=2.31%, ctx=983, majf=0, minf=4097 00:19:37.470 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:37.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:37.470 issued rwts: total=6009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:37.470 job5: (groupid=0, jobs=1): err= 0: pid=80644: Sat Dec 14 06:49:49 2024 00:19:37.470 read: IOPS=1432, BW=358MiB/s (376MB/s)(3592MiB/10028msec) 00:19:37.470 slat (usec): min=20, max=54399, avg=673.00, stdev=2839.87 00:19:37.470 clat (msec): min=15, max=192, avg=43.93, stdev=23.24 00:19:37.470 lat (msec): min=15, max=192, avg=44.60, stdev=23.52 00:19:37.470 clat percentiles (msec): 00:19:37.470 | 1.00th=[ 21], 5.00th=[ 25], 10.00th=[ 28], 20.00th=[ 31], 00:19:37.470 | 30.00th=[ 34], 40.00th=[ 36], 50.00th=[ 39], 60.00th=[ 41], 00:19:37.470 | 70.00th=[ 44], 80.00th=[ 48], 90.00th=[ 63], 95.00th=[ 106], 00:19:37.470 | 99.00th=[ 136], 99.50th=[ 146], 99.90th=[ 184], 99.95th=[ 188], 00:19:37.470 | 99.99th=[ 192] 00:19:37.470 bw ( KiB/s): min=124152, max=465920, per=21.38%, avg=366051.00, stdev=127292.12, samples=20 00:19:37.470 iops : min= 484, max= 1820, avg=1429.75, stdev=497.44, samples=20 00:19:37.470 lat (msec) : 20=0.80%, 50=83.45%, 100=10.22%, 250=5.53% 00:19:37.470 cpu : usr=0.44%, sys=4.35%, ctx=3097, majf=0, minf=4097 00:19:37.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:37.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:37.470 issued rwts: total=14366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:37.470 job6: (groupid=0, jobs=1): err= 0: pid=80645: Sat Dec 14 06:49:49 2024 00:19:37.470 read: IOPS=306, BW=76.5MiB/s (80.2MB/s)(776MiB/10144msec) 00:19:37.470 slat (usec): min=16, max=120628, avg=3238.16, stdev=12269.57 00:19:37.470 clat (msec): min=32, max=416, avg=205.49, stdev=37.77 00:19:37.470 lat (msec): min=32, max=416, avg=208.72, stdev=39.96 00:19:37.470 clat percentiles (msec): 00:19:37.470 | 1.00th=[ 56], 5.00th=[ 116], 10.00th=[ 182], 20.00th=[ 194], 00:19:37.470 | 30.00th=[ 199], 40.00th=[ 205], 50.00th=[ 209], 60.00th=[ 213], 00:19:37.470 | 70.00th=[ 222], 80.00th=[ 232], 90.00th=[ 243], 95.00th=[ 251], 00:19:37.470 | 99.00th=[ 275], 99.50th=[ 292], 99.90th=[ 309], 99.95th=[ 347], 00:19:37.470 | 99.99th=[ 418] 00:19:37.470 bw ( KiB/s): min=61440, max=132873, per=4.55%, avg=77832.20, stdev=15846.99, samples=20 00:19:37.470 iops : min= 240, max= 519, avg=303.95, stdev=61.92, samples=20 00:19:37.470 lat (msec) : 50=0.45%, 100=1.96%, 250=92.37%, 500=5.22% 00:19:37.470 cpu : usr=0.08%, sys=1.08%, ctx=930, majf=0, minf=4097 00:19:37.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:19:37.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:37.471 issued rwts: total=3105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:37.471 job7: (groupid=0, jobs=1): err= 0: pid=80646: Sat Dec 14 06:49:49 2024 00:19:37.471 read: IOPS=1550, BW=388MiB/s (407MB/s)(3884MiB/10018msec) 00:19:37.471 slat (usec): min=16, max=42029, avg=618.60, stdev=2515.22 00:19:37.471 clat (msec): min=16, max=147, 
avg=40.59, stdev=13.85 00:19:37.471 lat (msec): min=16, max=147, avg=41.21, stdev=13.98 00:19:37.471 clat percentiles (msec): 00:19:37.471 | 1.00th=[ 21], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 32], 00:19:37.471 | 30.00th=[ 35], 40.00th=[ 37], 50.00th=[ 40], 60.00th=[ 42], 00:19:37.471 | 70.00th=[ 44], 80.00th=[ 47], 90.00th=[ 52], 95.00th=[ 57], 00:19:37.471 | 99.00th=[ 109], 99.50th=[ 120], 99.90th=[ 132], 99.95th=[ 140], 00:19:37.471 | 99.99th=[ 148] 00:19:37.471 bw ( KiB/s): min=145699, max=493056, per=23.15%, avg=396388.90, stdev=81129.66, samples=20 00:19:37.471 iops : min= 569, max= 1926, avg=1548.15, stdev=316.97, samples=20 00:19:37.471 lat (msec) : 20=0.92%, 50=87.78%, 100=9.70%, 250=1.60% 00:19:37.471 cpu : usr=0.42%, sys=4.73%, ctx=3323, majf=0, minf=4097 00:19:37.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:37.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:37.471 issued rwts: total=15535,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:37.471 job8: (groupid=0, jobs=1): err= 0: pid=80647: Sat Dec 14 06:49:49 2024 00:19:37.471 read: IOPS=305, BW=76.4MiB/s (80.1MB/s)(774MiB/10134msec) 00:19:37.471 slat (usec): min=19, max=133339, avg=3177.35, stdev=10660.39 00:19:37.471 clat (msec): min=24, max=332, avg=205.84, stdev=37.96 00:19:37.471 lat (msec): min=24, max=340, avg=209.02, stdev=39.84 00:19:37.471 clat percentiles (msec): 00:19:37.471 | 1.00th=[ 86], 5.00th=[ 114], 10.00th=[ 176], 20.00th=[ 194], 00:19:37.471 | 30.00th=[ 201], 40.00th=[ 205], 50.00th=[ 209], 60.00th=[ 215], 00:19:37.471 | 70.00th=[ 222], 80.00th=[ 230], 90.00th=[ 243], 95.00th=[ 253], 00:19:37.471 | 99.00th=[ 279], 99.50th=[ 300], 99.90th=[ 317], 99.95th=[ 321], 00:19:37.471 | 99.99th=[ 334] 00:19:37.471 bw ( KiB/s): min=59904, max=125952, per=4.53%, avg=77639.00, stdev=14034.39, samples=20 00:19:37.471 iops : min= 234, max= 492, avg=303.20, stdev=54.83, samples=20 00:19:37.471 lat (msec) : 50=0.77%, 100=2.91%, 250=89.70%, 500=6.62% 00:19:37.471 cpu : usr=0.23%, sys=1.21%, ctx=686, majf=0, minf=4097 00:19:37.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:19:37.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:37.471 issued rwts: total=3097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:37.471 job9: (groupid=0, jobs=1): err= 0: pid=80648: Sat Dec 14 06:49:49 2024 00:19:37.471 read: IOPS=637, BW=159MiB/s (167MB/s)(1604MiB/10070msec) 00:19:37.471 slat (usec): min=21, max=71224, avg=1554.59, stdev=5568.59 00:19:37.471 clat (msec): min=21, max=186, avg=98.67, stdev=24.18 00:19:37.471 lat (msec): min=21, max=191, avg=100.22, stdev=24.95 00:19:37.471 clat percentiles (msec): 00:19:37.471 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 65], 20.00th=[ 85], 00:19:37.471 | 30.00th=[ 95], 40.00th=[ 101], 50.00th=[ 104], 60.00th=[ 108], 00:19:37.471 | 70.00th=[ 111], 80.00th=[ 117], 90.00th=[ 123], 95.00th=[ 129], 00:19:37.471 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 159], 99.95th=[ 167], 00:19:37.471 | 99.99th=[ 186] 00:19:37.471 bw ( KiB/s): min=124928, max=339110, per=9.50%, avg=162672.75, stdev=44636.37, samples=20 00:19:37.471 iops : min= 488, max= 1324, avg=635.35, stdev=174.22, 
samples=20 00:19:37.471 lat (msec) : 50=7.37%, 100=32.82%, 250=59.81% 00:19:37.471 cpu : usr=0.26%, sys=2.43%, ctx=1112, majf=0, minf=4097 00:19:37.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:37.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:37.471 issued rwts: total=6417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:37.471 job10: (groupid=0, jobs=1): err= 0: pid=80649: Sat Dec 14 06:49:49 2024 00:19:37.471 read: IOPS=362, BW=90.5MiB/s (94.9MB/s)(919MiB/10145msec) 00:19:37.471 slat (usec): min=20, max=158113, avg=2692.30, stdev=10119.13 00:19:37.471 clat (msec): min=29, max=372, avg=173.67, stdev=65.40 00:19:37.471 lat (msec): min=30, max=399, avg=176.36, stdev=66.93 00:19:37.471 clat percentiles (msec): 00:19:37.471 | 1.00th=[ 57], 5.00th=[ 77], 10.00th=[ 81], 20.00th=[ 101], 00:19:37.471 | 30.00th=[ 120], 40.00th=[ 138], 50.00th=[ 203], 60.00th=[ 211], 00:19:37.471 | 70.00th=[ 220], 80.00th=[ 230], 90.00th=[ 245], 95.00th=[ 253], 00:19:37.471 | 99.00th=[ 305], 99.50th=[ 342], 99.90th=[ 359], 99.95th=[ 363], 00:19:37.471 | 99.99th=[ 372] 00:19:37.471 bw ( KiB/s): min=63872, max=182419, per=5.40%, avg=92390.05, stdev=38061.44, samples=20 00:19:37.471 iops : min= 249, max= 712, avg=360.75, stdev=148.52, samples=20 00:19:37.471 lat (msec) : 50=0.60%, 100=19.05%, 250=73.65%, 500=6.70% 00:19:37.471 cpu : usr=0.10%, sys=1.35%, ctx=1042, majf=0, minf=4097 00:19:37.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:19:37.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:37.471 issued rwts: total=3674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:37.471 00:19:37.471 Run status group 0 (all jobs): 00:19:37.471 READ: bw=1672MiB/s (1753MB/s), 76.4MiB/s-388MiB/s (80.1MB/s-407MB/s), io=16.6GiB (17.8GB), run=10018-10146msec 00:19:37.471 00:19:37.471 Disk stats (read/write): 00:19:37.471 nvme0n1: ios=6181/0, merge=0/0, ticks=1231397/0, in_queue=1231397, util=97.42% 00:19:37.471 nvme10n1: ios=12222/0, merge=0/0, ticks=1236248/0, in_queue=1236248, util=97.54% 00:19:37.471 nvme1n1: ios=6198/0, merge=0/0, ticks=1234587/0, in_queue=1234587, util=97.99% 00:19:37.471 nvme2n1: ios=6137/0, merge=0/0, ticks=1231024/0, in_queue=1231024, util=97.78% 00:19:37.471 nvme3n1: ios=11890/0, merge=0/0, ticks=1236894/0, in_queue=1236894, util=97.73% 00:19:37.471 nvme4n1: ios=28552/0, merge=0/0, ticks=1226033/0, in_queue=1226033, util=98.02% 00:19:37.471 nvme5n1: ios=6086/0, merge=0/0, ticks=1234867/0, in_queue=1234867, util=98.04% 00:19:37.471 nvme6n1: ios=30942/0, merge=0/0, ticks=1219042/0, in_queue=1219042, util=98.09% 00:19:37.471 nvme7n1: ios=6055/0, merge=0/0, ticks=1235060/0, in_queue=1235060, util=98.70% 00:19:37.471 nvme8n1: ios=12707/0, merge=0/0, ticks=1236982/0, in_queue=1236982, util=98.76% 00:19:37.471 nvme9n1: ios=7221/0, merge=0/0, ticks=1229078/0, in_queue=1229078, util=98.69% 00:19:37.471 06:49:49 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:19:37.471 [global] 00:19:37.471 thread=1 00:19:37.471 invalidate=1 00:19:37.471 rw=randwrite 00:19:37.471 time_based=1 00:19:37.471 runtime=10 
00:19:37.471 ioengine=libaio 00:19:37.471 direct=1 00:19:37.471 bs=262144 00:19:37.471 iodepth=64 00:19:37.471 norandommap=1 00:19:37.471 numjobs=1 00:19:37.471 00:19:37.471 [job0] 00:19:37.471 filename=/dev/nvme0n1 00:19:37.471 [job1] 00:19:37.471 filename=/dev/nvme10n1 00:19:37.471 [job2] 00:19:37.471 filename=/dev/nvme1n1 00:19:37.471 [job3] 00:19:37.471 filename=/dev/nvme2n1 00:19:37.471 [job4] 00:19:37.471 filename=/dev/nvme3n1 00:19:37.471 [job5] 00:19:37.471 filename=/dev/nvme4n1 00:19:37.471 [job6] 00:19:37.471 filename=/dev/nvme5n1 00:19:37.471 [job7] 00:19:37.471 filename=/dev/nvme6n1 00:19:37.471 [job8] 00:19:37.471 filename=/dev/nvme7n1 00:19:37.471 [job9] 00:19:37.471 filename=/dev/nvme8n1 00:19:37.471 [job10] 00:19:37.471 filename=/dev/nvme9n1 00:19:37.471 Could not set queue depth (nvme0n1) 00:19:37.471 Could not set queue depth (nvme10n1) 00:19:37.471 Could not set queue depth (nvme1n1) 00:19:37.471 Could not set queue depth (nvme2n1) 00:19:37.471 Could not set queue depth (nvme3n1) 00:19:37.471 Could not set queue depth (nvme4n1) 00:19:37.471 Could not set queue depth (nvme5n1) 00:19:37.471 Could not set queue depth (nvme6n1) 00:19:37.471 Could not set queue depth (nvme7n1) 00:19:37.471 Could not set queue depth (nvme8n1) 00:19:37.471 Could not set queue depth (nvme9n1) 00:19:37.471 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:37.471 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:37.471 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:37.471 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:37.471 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:37.471 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:37.471 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:37.471 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:37.471 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:37.471 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:37.471 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:37.471 fio-3.35 00:19:37.471 Starting 11 threads 00:19:47.451 00:19:47.451 job0: (groupid=0, jobs=1): err= 0: pid=80850: Sat Dec 14 06:50:00 2024 00:19:47.451 write: IOPS=825, BW=206MiB/s (216MB/s)(2072MiB/10045msec); 0 zone resets 00:19:47.451 slat (usec): min=15, max=64739, avg=1156.99, stdev=3538.99 00:19:47.451 clat (usec): min=813, max=282944, avg=76378.98, stdev=75399.83 00:19:47.451 lat (usec): min=1025, max=283010, avg=77535.97, stdev=76473.30 00:19:47.451 clat percentiles (msec): 00:19:47.451 | 1.00th=[ 8], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:19:47.451 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 46], 00:19:47.451 | 70.00th=[ 50], 80.00th=[ 52], 90.00th=[ 245], 95.00th=[ 257], 00:19:47.451 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 284], 99.95th=[ 284], 00:19:47.451 | 99.99th=[ 284] 
00:19:47.451 bw ( KiB/s): min=57344, max=390656, per=16.75%, avg=210552.80, stdev=149960.41, samples=20 00:19:47.451 iops : min= 224, max= 1526, avg=822.45, stdev=585.80, samples=20 00:19:47.451 lat (usec) : 1000=0.04% 00:19:47.451 lat (msec) : 2=0.12%, 4=0.42%, 10=0.80%, 20=1.34%, 50=71.94% 00:19:47.451 lat (msec) : 100=8.55%, 250=9.79%, 500=7.01% 00:19:47.451 cpu : usr=1.33%, sys=1.58%, ctx=7259, majf=0, minf=1 00:19:47.451 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:47.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:47.451 issued rwts: total=0,8288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.451 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:47.451 job1: (groupid=0, jobs=1): err= 0: pid=80851: Sat Dec 14 06:50:00 2024 00:19:47.451 write: IOPS=266, BW=66.7MiB/s (70.0MB/s)(683MiB/10234msec); 0 zone resets 00:19:47.451 slat (usec): min=18, max=63622, avg=3661.91, stdev=7038.43 00:19:47.451 clat (msec): min=50, max=494, avg=236.04, stdev=35.35 00:19:47.451 lat (msec): min=50, max=494, avg=239.70, stdev=35.02 00:19:47.451 clat percentiles (msec): 00:19:47.451 | 1.00th=[ 155], 5.00th=[ 199], 10.00th=[ 205], 20.00th=[ 213], 00:19:47.451 | 30.00th=[ 220], 40.00th=[ 226], 50.00th=[ 230], 60.00th=[ 236], 00:19:47.451 | 70.00th=[ 251], 80.00th=[ 264], 90.00th=[ 271], 95.00th=[ 275], 00:19:47.451 | 99.00th=[ 388], 99.50th=[ 447], 99.90th=[ 477], 99.95th=[ 493], 00:19:47.451 | 99.99th=[ 493] 00:19:47.451 bw ( KiB/s): min=59392, max=77824, per=5.43%, avg=68281.25, stdev=6147.76, samples=20 00:19:47.451 iops : min= 232, max= 304, avg=266.70, stdev=24.05, samples=20 00:19:47.451 lat (msec) : 100=0.37%, 250=68.88%, 500=30.76% 00:19:47.451 cpu : usr=0.71%, sys=0.78%, ctx=3144, majf=0, minf=1 00:19:47.451 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:19:47.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:47.451 issued rwts: total=0,2731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.451 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:47.451 job2: (groupid=0, jobs=1): err= 0: pid=80863: Sat Dec 14 06:50:00 2024 00:19:47.451 write: IOPS=274, BW=68.6MiB/s (71.9MB/s)(703MiB/10243msec); 0 zone resets 00:19:47.451 slat (usec): min=27, max=98823, avg=3555.79, stdev=6866.75 00:19:47.451 clat (msec): min=7, max=481, avg=229.48, stdev=42.48 00:19:47.451 lat (msec): min=7, max=481, avg=233.04, stdev=42.45 00:19:47.451 clat percentiles (msec): 00:19:47.451 | 1.00th=[ 45], 5.00th=[ 194], 10.00th=[ 203], 20.00th=[ 211], 00:19:47.451 | 30.00th=[ 218], 40.00th=[ 222], 50.00th=[ 226], 60.00th=[ 230], 00:19:47.451 | 70.00th=[ 243], 80.00th=[ 255], 90.00th=[ 271], 95.00th=[ 279], 00:19:47.451 | 99.00th=[ 376], 99.50th=[ 422], 99.90th=[ 464], 99.95th=[ 481], 00:19:47.451 | 99.99th=[ 481] 00:19:47.451 bw ( KiB/s): min=61317, max=79872, per=5.59%, avg=70302.45, stdev=5715.58, samples=20 00:19:47.451 iops : min= 239, max= 312, avg=274.55, stdev=22.35, samples=20 00:19:47.451 lat (msec) : 10=0.36%, 20=0.32%, 50=0.57%, 100=0.71%, 250=73.53% 00:19:47.451 lat (msec) : 500=24.51% 00:19:47.451 cpu : usr=0.95%, sys=0.83%, ctx=1290, majf=0, minf=1 00:19:47.451 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:19:47.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:19:47.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:47.451 issued rwts: total=0,2811,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.451 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:47.451 job3: (groupid=0, jobs=1): err= 0: pid=80864: Sat Dec 14 06:50:00 2024 00:19:47.451 write: IOPS=278, BW=69.7MiB/s (73.1MB/s)(713MiB/10233msec); 0 zone resets 00:19:47.451 slat (usec): min=18, max=96004, avg=3494.50, stdev=7416.18 00:19:47.451 clat (msec): min=4, max=478, avg=225.90, stdev=72.66 00:19:47.451 lat (msec): min=4, max=478, avg=229.40, stdev=73.34 00:19:47.451 clat percentiles (msec): 00:19:47.451 | 1.00th=[ 26], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 213], 00:19:47.451 | 30.00th=[ 232], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 253], 00:19:47.451 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 279], 95.00th=[ 284], 00:19:47.451 | 99.00th=[ 363], 99.50th=[ 405], 99.90th=[ 460], 99.95th=[ 481], 00:19:47.451 | 99.99th=[ 481] 00:19:47.451 bw ( KiB/s): min=55296, max=196608, per=5.68%, avg=71385.70, stdev=29856.85, samples=20 00:19:47.451 iops : min= 216, max= 768, avg=278.80, stdev=116.64, samples=20 00:19:47.451 lat (msec) : 10=0.28%, 20=0.56%, 50=10.62%, 100=0.04%, 250=42.76% 00:19:47.451 lat (msec) : 500=45.74% 00:19:47.451 cpu : usr=0.76%, sys=0.81%, ctx=3011, majf=0, minf=1 00:19:47.451 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:19:47.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:47.451 issued rwts: total=0,2853,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.451 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:47.451 job4: (groupid=0, jobs=1): err= 0: pid=80865: Sat Dec 14 06:50:00 2024 00:19:47.451 write: IOPS=254, BW=63.6MiB/s (66.7MB/s)(651MiB/10226msec); 0 zone resets 00:19:47.451 slat (usec): min=22, max=98399, avg=3762.87, stdev=7627.16 00:19:47.451 clat (msec): min=59, max=496, avg=247.53, stdev=33.11 00:19:47.451 lat (msec): min=59, max=496, avg=251.29, stdev=32.40 00:19:47.451 clat percentiles (msec): 00:19:47.451 | 1.00th=[ 188], 5.00th=[ 203], 10.00th=[ 213], 20.00th=[ 228], 00:19:47.451 | 30.00th=[ 236], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:19:47.451 | 70.00th=[ 257], 80.00th=[ 268], 90.00th=[ 275], 95.00th=[ 284], 00:19:47.451 | 99.00th=[ 388], 99.50th=[ 447], 99.90th=[ 481], 99.95th=[ 498], 00:19:47.451 | 99.99th=[ 498] 00:19:47.452 bw ( KiB/s): min=47616, max=71680, per=5.17%, avg=65009.85, stdev=6514.51, samples=20 00:19:47.452 iops : min= 186, max= 280, avg=253.90, stdev=25.41, samples=20 00:19:47.452 lat (msec) : 100=0.19%, 250=60.43%, 500=39.38% 00:19:47.452 cpu : usr=0.59%, sys=1.01%, ctx=2928, majf=0, minf=1 00:19:47.452 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:19:47.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:47.452 issued rwts: total=0,2603,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.452 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:47.452 job5: (groupid=0, jobs=1): err= 0: pid=80866: Sat Dec 14 06:50:00 2024 00:19:47.452 write: IOPS=464, BW=116MiB/s (122MB/s)(1179MiB/10140msec); 0 zone resets 00:19:47.452 slat (usec): min=17, max=15513, avg=2068.04, stdev=3677.51 00:19:47.452 clat (msec): min=2, max=310, avg=135.48, stdev=28.40 00:19:47.452 lat (msec): 
min=2, max=310, avg=137.55, stdev=28.70 00:19:47.452 clat percentiles (msec): 00:19:47.452 | 1.00th=[ 39], 5.00th=[ 85], 10.00th=[ 120], 20.00th=[ 124], 00:19:47.452 | 30.00th=[ 127], 40.00th=[ 129], 50.00th=[ 130], 60.00th=[ 132], 00:19:47.452 | 70.00th=[ 150], 80.00th=[ 153], 90.00th=[ 171], 95.00th=[ 180], 00:19:47.452 | 99.00th=[ 194], 99.50th=[ 243], 99.90th=[ 300], 99.95th=[ 300], 00:19:47.452 | 99.99th=[ 309] 00:19:47.452 bw ( KiB/s): min=90112, max=183808, per=9.47%, avg=119055.15, stdev=20509.09, samples=20 00:19:47.452 iops : min= 352, max= 718, avg=465.05, stdev=80.12, samples=20 00:19:47.452 lat (msec) : 4=0.11%, 10=0.11%, 20=0.02%, 50=1.97%, 100=3.24% 00:19:47.452 lat (msec) : 250=94.08%, 500=0.47% 00:19:47.452 cpu : usr=1.34%, sys=1.23%, ctx=6081, majf=0, minf=1 00:19:47.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:47.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:47.452 issued rwts: total=0,4715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.452 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:47.452 job6: (groupid=0, jobs=1): err= 0: pid=80867: Sat Dec 14 06:50:00 2024 00:19:47.452 write: IOPS=255, BW=64.0MiB/s (67.1MB/s)(654MiB/10223msec); 0 zone resets 00:19:47.452 slat (usec): min=23, max=82425, avg=3816.04, stdev=7735.67 00:19:47.452 clat (msec): min=65, max=507, avg=246.17, stdev=33.92 00:19:47.452 lat (msec): min=65, max=507, avg=249.98, stdev=33.33 00:19:47.452 clat percentiles (msec): 00:19:47.452 | 1.00th=[ 186], 5.00th=[ 203], 10.00th=[ 211], 20.00th=[ 226], 00:19:47.452 | 30.00th=[ 234], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 249], 00:19:47.452 | 70.00th=[ 259], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 284], 00:19:47.452 | 99.00th=[ 376], 99.50th=[ 460], 99.90th=[ 493], 99.95th=[ 506], 00:19:47.452 | 99.99th=[ 506] 00:19:47.452 bw ( KiB/s): min=51302, max=75624, per=5.20%, avg=65347.10, stdev=6647.40, samples=20 00:19:47.452 iops : min= 200, max= 295, avg=255.20, stdev=25.96, samples=20 00:19:47.452 lat (msec) : 100=0.15%, 250=62.84%, 500=36.93%, 750=0.08% 00:19:47.452 cpu : usr=0.82%, sys=0.85%, ctx=1483, majf=0, minf=1 00:19:47.452 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:19:47.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:47.452 issued rwts: total=0,2616,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.452 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:47.452 job7: (groupid=0, jobs=1): err= 0: pid=80868: Sat Dec 14 06:50:00 2024 00:19:47.452 write: IOPS=908, BW=227MiB/s (238MB/s)(2282MiB/10049msec); 0 zone resets 00:19:47.452 slat (usec): min=14, max=11582, avg=1076.67, stdev=2224.85 00:19:47.452 clat (usec): min=1885, max=160753, avg=69368.91, stdev=41127.40 00:19:47.452 lat (usec): min=1932, max=163400, avg=70445.58, stdev=41731.17 00:19:47.452 clat percentiles (msec): 00:19:47.452 | 1.00th=[ 13], 5.00th=[ 42], 10.00th=[ 43], 20.00th=[ 44], 00:19:47.452 | 30.00th=[ 45], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 51], 00:19:47.452 | 70.00th=[ 54], 80.00th=[ 126], 90.00th=[ 144], 95.00th=[ 153], 00:19:47.452 | 99.00th=[ 153], 99.50th=[ 155], 99.90th=[ 159], 99.95th=[ 159], 00:19:47.452 | 99.99th=[ 161] 00:19:47.452 bw ( KiB/s): min=106496, max=370688, per=18.45%, avg=231946.35, stdev=116847.88, samples=20 00:19:47.452 iops 
: min= 416, max= 1448, avg=906.00, stdev=456.40, samples=20 00:19:47.452 lat (msec) : 2=0.03%, 4=0.20%, 10=0.55%, 20=0.83%, 50=58.47% 00:19:47.452 lat (msec) : 100=13.18%, 250=26.73% 00:19:47.452 cpu : usr=1.69%, sys=1.85%, ctx=15141, majf=0, minf=2 00:19:47.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:47.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:47.452 issued rwts: total=0,9127,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.452 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:47.452 job8: (groupid=0, jobs=1): err= 0: pid=80869: Sat Dec 14 06:50:00 2024 00:19:47.452 write: IOPS=268, BW=67.1MiB/s (70.4MB/s)(687MiB/10234msec); 0 zone resets 00:19:47.452 slat (usec): min=20, max=54287, avg=3637.90, stdev=6915.79 00:19:47.452 clat (msec): min=26, max=484, avg=234.59, stdev=36.31 00:19:47.452 lat (msec): min=26, max=485, avg=238.23, stdev=36.06 00:19:47.452 clat percentiles (msec): 00:19:47.452 | 1.00th=[ 75], 5.00th=[ 197], 10.00th=[ 207], 20.00th=[ 218], 00:19:47.452 | 30.00th=[ 226], 40.00th=[ 234], 50.00th=[ 236], 60.00th=[ 243], 00:19:47.452 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 259], 95.00th=[ 264], 00:19:47.452 | 99.00th=[ 363], 99.50th=[ 426], 99.90th=[ 468], 99.95th=[ 485], 00:19:47.452 | 99.99th=[ 485] 00:19:47.452 bw ( KiB/s): min=61440, max=75776, per=5.46%, avg=68663.65, stdev=3865.35, samples=20 00:19:47.452 iops : min= 240, max= 296, avg=268.15, stdev=15.08, samples=20 00:19:47.452 lat (msec) : 50=0.66%, 100=0.73%, 250=74.85%, 500=23.77% 00:19:47.452 cpu : usr=0.57%, sys=1.10%, ctx=2642, majf=0, minf=1 00:19:47.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:19:47.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:47.452 issued rwts: total=0,2747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.452 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:47.452 job9: (groupid=0, jobs=1): err= 0: pid=80870: Sat Dec 14 06:50:00 2024 00:19:47.452 write: IOPS=455, BW=114MiB/s (119MB/s)(1153MiB/10131msec); 0 zone resets 00:19:47.452 slat (usec): min=21, max=15735, avg=2164.45, stdev=3727.76 00:19:47.452 clat (msec): min=15, max=295, avg=138.36, stdev=22.60 00:19:47.452 lat (msec): min=15, max=295, avg=140.52, stdev=22.64 00:19:47.452 clat percentiles (msec): 00:19:47.452 | 1.00th=[ 95], 5.00th=[ 118], 10.00th=[ 122], 20.00th=[ 125], 00:19:47.452 | 30.00th=[ 127], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 132], 00:19:47.452 | 70.00th=[ 150], 80.00th=[ 153], 90.00th=[ 171], 95.00th=[ 180], 00:19:47.452 | 99.00th=[ 192], 99.50th=[ 230], 99.90th=[ 284], 99.95th=[ 284], 00:19:47.452 | 99.99th=[ 296] 00:19:47.452 bw ( KiB/s): min=90112, max=133120, per=9.26%, avg=116430.80, stdev=14193.83, samples=20 00:19:47.452 iops : min= 352, max= 520, avg=454.80, stdev=55.45, samples=20 00:19:47.452 lat (msec) : 20=0.09%, 50=0.35%, 100=0.80%, 250=98.37%, 500=0.39% 00:19:47.452 cpu : usr=1.24%, sys=1.29%, ctx=6072, majf=0, minf=1 00:19:47.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:19:47.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:47.452 issued rwts: total=0,4612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.452 latency 
: target=0, window=0, percentile=100.00%, depth=64 00:19:47.452 job10: (groupid=0, jobs=1): err= 0: pid=80871: Sat Dec 14 06:50:00 2024 00:19:47.452 write: IOPS=711, BW=178MiB/s (186MB/s)(1802MiB/10135msec); 0 zone resets 00:19:47.452 slat (usec): min=22, max=15750, avg=1375.27, stdev=2862.03 00:19:47.452 clat (usec): min=1390, max=306364, avg=88555.52, stdev=54153.63 00:19:47.452 lat (msec): min=2, max=306, avg=89.93, stdev=54.93 00:19:47.452 clat percentiles (msec): 00:19:47.452 | 1.00th=[ 17], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 41], 00:19:47.452 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 45], 60.00th=[ 124], 00:19:47.452 | 70.00th=[ 131], 80.00th=[ 148], 90.00th=[ 155], 95.00th=[ 178], 00:19:47.452 | 99.00th=[ 186], 99.50th=[ 207], 99.90th=[ 284], 99.95th=[ 296], 00:19:47.452 | 99.99th=[ 309] 00:19:47.452 bw ( KiB/s): min=90112, max=411648, per=14.54%, avg=182861.20, stdev=121651.94, samples=20 00:19:47.452 iops : min= 352, max= 1608, avg=714.30, stdev=475.20, samples=20 00:19:47.452 lat (msec) : 2=0.01%, 4=0.03%, 10=0.49%, 20=0.65%, 50=50.69% 00:19:47.452 lat (msec) : 100=3.07%, 250=44.76%, 500=0.31% 00:19:47.452 cpu : usr=2.02%, sys=1.76%, ctx=8775, majf=0, minf=1 00:19:47.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:47.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:47.452 issued rwts: total=0,7207,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.452 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:47.452 00:19:47.452 Run status group 0 (all jobs): 00:19:47.452 WRITE: bw=1228MiB/s (1288MB/s), 63.6MiB/s-227MiB/s (66.7MB/s-238MB/s), io=12.3GiB (13.2GB), run=10045-10243msec 00:19:47.452 00:19:47.452 Disk stats (read/write): 00:19:47.452 nvme0n1: ios=49/16388, merge=0/0, ticks=26/1215836, in_queue=1215862, util=97.65% 00:19:47.452 nvme10n1: ios=49/5451, merge=0/0, ticks=119/1235456, in_queue=1235575, util=98.24% 00:19:47.452 nvme1n1: ios=23/5618, merge=0/0, ticks=21/1237193, in_queue=1237214, util=98.07% 00:19:47.452 nvme2n1: ios=0/5694, merge=0/0, ticks=0/1234369, in_queue=1234369, util=97.94% 00:19:47.452 nvme3n1: ios=0/5199, merge=0/0, ticks=0/1235637, in_queue=1235637, util=97.98% 00:19:47.452 nvme4n1: ios=0/9295, merge=0/0, ticks=0/1212021, in_queue=1212021, util=98.29% 00:19:47.453 nvme5n1: ios=0/5228, merge=0/0, ticks=0/1234680, in_queue=1234680, util=98.31% 00:19:47.453 nvme6n1: ios=0/18099, merge=0/0, ticks=0/1217725, in_queue=1217725, util=98.52% 00:19:47.453 nvme7n1: ios=0/5488, merge=0/0, ticks=0/1235965, in_queue=1235965, util=98.70% 00:19:47.453 nvme8n1: ios=0/9075, merge=0/0, ticks=0/1208662, in_queue=1208662, util=98.71% 00:19:47.453 nvme9n1: ios=0/14275, merge=0/0, ticks=0/1209734, in_queue=1209734, util=98.89% 00:19:47.453 06:50:00 -- target/multiconnection.sh@36 -- # sync 00:19:47.453 06:50:00 -- target/multiconnection.sh@37 -- # seq 1 11 00:19:47.453 06:50:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:47.453 06:50:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:47.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:47.453 06:50:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:47.453 06:50:00 -- common/autotest_common.sh@1208 -- # local i=0 00:19:47.453 06:50:00 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:47.453 06:50:00 -- common/autotest_common.sh@1209 -- # 
grep -q -w SPDK1 00:19:47.453 06:50:00 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:47.453 06:50:00 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:19:47.453 06:50:00 -- common/autotest_common.sh@1220 -- # return 0 00:19:47.453 06:50:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:47.453 06:50:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.453 06:50:00 -- common/autotest_common.sh@10 -- # set +x 00:19:47.453 06:50:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.453 06:50:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:47.453 06:50:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:47.453 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:47.453 06:50:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:47.453 06:50:00 -- common/autotest_common.sh@1208 -- # local i=0 00:19:47.453 06:50:00 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:47.453 06:50:00 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:19:47.453 06:50:00 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:19:47.453 06:50:00 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:47.453 06:50:00 -- common/autotest_common.sh@1220 -- # return 0 00:19:47.453 06:50:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:47.453 06:50:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.453 06:50:00 -- common/autotest_common.sh@10 -- # set +x 00:19:47.453 06:50:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.453 06:50:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:47.453 06:50:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:47.453 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:47.453 06:50:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:47.453 06:50:00 -- common/autotest_common.sh@1208 -- # local i=0 00:19:47.453 06:50:00 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:47.453 06:50:00 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:19:47.453 06:50:00 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:47.453 06:50:00 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:19:47.453 06:50:00 -- common/autotest_common.sh@1220 -- # return 0 00:19:47.453 06:50:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:47.453 06:50:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.453 06:50:00 -- common/autotest_common.sh@10 -- # set +x 00:19:47.453 06:50:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.453 06:50:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:47.453 06:50:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:47.453 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:47.453 06:50:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:47.453 06:50:00 -- common/autotest_common.sh@1208 -- # local i=0 00:19:47.453 06:50:00 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:47.453 06:50:00 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:19:47.453 06:50:00 -- common/autotest_common.sh@1216 -- # lsblk -l -o 
NAME,SERIAL 00:19:47.453 06:50:00 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:19:47.453 06:50:00 -- common/autotest_common.sh@1220 -- # return 0 00:19:47.453 06:50:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:47.453 06:50:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.453 06:50:00 -- common/autotest_common.sh@10 -- # set +x 00:19:47.453 06:50:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.453 06:50:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:47.453 06:50:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:47.453 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:47.453 06:50:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:47.453 06:50:00 -- common/autotest_common.sh@1208 -- # local i=0 00:19:47.453 06:50:00 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:47.453 06:50:00 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:19:47.453 06:50:00 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:47.453 06:50:00 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:19:47.453 06:50:00 -- common/autotest_common.sh@1220 -- # return 0 00:19:47.453 06:50:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:47.453 06:50:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.453 06:50:00 -- common/autotest_common.sh@10 -- # set +x 00:19:47.453 06:50:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.453 06:50:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:47.453 06:50:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:47.453 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:47.453 06:50:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:47.453 06:50:00 -- common/autotest_common.sh@1208 -- # local i=0 00:19:47.453 06:50:00 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:47.453 06:50:00 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:19:47.453 06:50:00 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:47.453 06:50:00 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:19:47.453 06:50:00 -- common/autotest_common.sh@1220 -- # return 0 00:19:47.453 06:50:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:47.453 06:50:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.453 06:50:00 -- common/autotest_common.sh@10 -- # set +x 00:19:47.453 06:50:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.453 06:50:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:47.453 06:50:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:47.453 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:47.453 06:50:01 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:47.453 06:50:01 -- common/autotest_common.sh@1208 -- # local i=0 00:19:47.453 06:50:01 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:47.453 06:50:01 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:19:47.453 06:50:01 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:47.453 06:50:01 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 
00:19:47.453 06:50:01 -- common/autotest_common.sh@1220 -- # return 0 00:19:47.453 06:50:01 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:47.453 06:50:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.453 06:50:01 -- common/autotest_common.sh@10 -- # set +x 00:19:47.453 06:50:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.453 06:50:01 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:47.453 06:50:01 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:47.453 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:47.453 06:50:01 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:47.453 06:50:01 -- common/autotest_common.sh@1208 -- # local i=0 00:19:47.453 06:50:01 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:47.453 06:50:01 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:19:47.453 06:50:01 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:19:47.453 06:50:01 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:47.453 06:50:01 -- common/autotest_common.sh@1220 -- # return 0 00:19:47.453 06:50:01 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:47.453 06:50:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.453 06:50:01 -- common/autotest_common.sh@10 -- # set +x 00:19:47.453 06:50:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.453 06:50:01 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:47.453 06:50:01 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:47.453 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:47.453 06:50:01 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:47.453 06:50:01 -- common/autotest_common.sh@1208 -- # local i=0 00:19:47.453 06:50:01 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:19:47.453 06:50:01 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:47.453 06:50:01 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:47.453 06:50:01 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:19:47.453 06:50:01 -- common/autotest_common.sh@1220 -- # return 0 00:19:47.453 06:50:01 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:47.453 06:50:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.453 06:50:01 -- common/autotest_common.sh@10 -- # set +x 00:19:47.453 06:50:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.453 06:50:01 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:47.453 06:50:01 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:47.453 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:47.453 06:50:01 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:47.453 06:50:01 -- common/autotest_common.sh@1208 -- # local i=0 00:19:47.453 06:50:01 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:47.453 06:50:01 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:19:47.453 06:50:01 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:47.453 06:50:01 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:19:47.453 06:50:01 -- common/autotest_common.sh@1220 -- # return 0 00:19:47.453 06:50:01 
-- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:47.453 06:50:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.453 06:50:01 -- common/autotest_common.sh@10 -- # set +x 00:19:47.453 06:50:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.453 06:50:01 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:47.453 06:50:01 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:47.712 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:47.712 06:50:01 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:47.712 06:50:01 -- common/autotest_common.sh@1208 -- # local i=0 00:19:47.712 06:50:01 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:47.712 06:50:01 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:19:47.712 06:50:01 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:19:47.712 06:50:01 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:47.712 06:50:01 -- common/autotest_common.sh@1220 -- # return 0 00:19:47.712 06:50:01 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:47.712 06:50:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.712 06:50:01 -- common/autotest_common.sh@10 -- # set +x 00:19:47.712 06:50:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.712 06:50:01 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:47.712 06:50:01 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:47.712 06:50:01 -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:47.712 06:50:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:47.712 06:50:01 -- nvmf/common.sh@116 -- # sync 00:19:47.712 06:50:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:47.712 06:50:01 -- nvmf/common.sh@119 -- # set +e 00:19:47.712 06:50:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:47.712 06:50:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:47.712 rmmod nvme_tcp 00:19:47.712 rmmod nvme_fabrics 00:19:47.712 rmmod nvme_keyring 00:19:47.712 06:50:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:47.712 06:50:01 -- nvmf/common.sh@123 -- # set -e 00:19:47.712 06:50:01 -- nvmf/common.sh@124 -- # return 0 00:19:47.712 06:50:01 -- nvmf/common.sh@477 -- # '[' -n 80156 ']' 00:19:47.712 06:50:01 -- nvmf/common.sh@478 -- # killprocess 80156 00:19:47.712 06:50:01 -- common/autotest_common.sh@936 -- # '[' -z 80156 ']' 00:19:47.712 06:50:01 -- common/autotest_common.sh@940 -- # kill -0 80156 00:19:47.712 06:50:01 -- common/autotest_common.sh@941 -- # uname 00:19:47.712 06:50:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:47.712 06:50:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80156 00:19:47.712 killing process with pid 80156 00:19:47.712 06:50:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:47.712 06:50:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:47.712 06:50:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80156' 00:19:47.712 06:50:01 -- common/autotest_common.sh@955 -- # kill 80156 00:19:47.712 06:50:01 -- common/autotest_common.sh@960 -- # wait 80156 00:19:48.647 06:50:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:48.647 06:50:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:48.647 06:50:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 
00:19:48.647 06:50:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:48.647 06:50:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:48.647 06:50:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.647 06:50:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.647 06:50:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.647 06:50:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:48.647 00:19:48.647 real 0m50.255s 00:19:48.647 user 2m46.473s 00:19:48.647 sys 0m25.579s 00:19:48.647 06:50:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:48.647 06:50:02 -- common/autotest_common.sh@10 -- # set +x 00:19:48.647 ************************************ 00:19:48.647 END TEST nvmf_multiconnection 00:19:48.647 ************************************ 00:19:48.647 06:50:02 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:48.647 06:50:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:48.647 06:50:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:48.647 06:50:02 -- common/autotest_common.sh@10 -- # set +x 00:19:48.647 ************************************ 00:19:48.647 START TEST nvmf_initiator_timeout 00:19:48.647 ************************************ 00:19:48.647 06:50:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:48.647 * Looking for test storage... 00:19:48.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:48.647 06:50:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:48.647 06:50:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:48.647 06:50:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:48.647 06:50:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:48.647 06:50:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:48.647 06:50:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:48.647 06:50:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:48.647 06:50:02 -- scripts/common.sh@335 -- # IFS=.-: 00:19:48.647 06:50:02 -- scripts/common.sh@335 -- # read -ra ver1 00:19:48.647 06:50:02 -- scripts/common.sh@336 -- # IFS=.-: 00:19:48.647 06:50:02 -- scripts/common.sh@336 -- # read -ra ver2 00:19:48.647 06:50:02 -- scripts/common.sh@337 -- # local 'op=<' 00:19:48.647 06:50:02 -- scripts/common.sh@339 -- # ver1_l=2 00:19:48.647 06:50:02 -- scripts/common.sh@340 -- # ver2_l=1 00:19:48.647 06:50:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:48.647 06:50:02 -- scripts/common.sh@343 -- # case "$op" in 00:19:48.647 06:50:02 -- scripts/common.sh@344 -- # : 1 00:19:48.647 06:50:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:48.647 06:50:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:48.647 06:50:02 -- scripts/common.sh@364 -- # decimal 1 00:19:48.647 06:50:02 -- scripts/common.sh@352 -- # local d=1 00:19:48.647 06:50:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:48.647 06:50:02 -- scripts/common.sh@354 -- # echo 1 00:19:48.647 06:50:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:48.647 06:50:02 -- scripts/common.sh@365 -- # decimal 2 00:19:48.647 06:50:02 -- scripts/common.sh@352 -- # local d=2 00:19:48.647 06:50:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:48.647 06:50:02 -- scripts/common.sh@354 -- # echo 2 00:19:48.647 06:50:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:48.647 06:50:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:48.647 06:50:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:48.647 06:50:02 -- scripts/common.sh@367 -- # return 0 00:19:48.647 06:50:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:48.647 06:50:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:48.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.647 --rc genhtml_branch_coverage=1 00:19:48.647 --rc genhtml_function_coverage=1 00:19:48.647 --rc genhtml_legend=1 00:19:48.647 --rc geninfo_all_blocks=1 00:19:48.647 --rc geninfo_unexecuted_blocks=1 00:19:48.647 00:19:48.647 ' 00:19:48.647 06:50:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:48.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.647 --rc genhtml_branch_coverage=1 00:19:48.647 --rc genhtml_function_coverage=1 00:19:48.647 --rc genhtml_legend=1 00:19:48.647 --rc geninfo_all_blocks=1 00:19:48.647 --rc geninfo_unexecuted_blocks=1 00:19:48.647 00:19:48.647 ' 00:19:48.647 06:50:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:48.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.647 --rc genhtml_branch_coverage=1 00:19:48.647 --rc genhtml_function_coverage=1 00:19:48.647 --rc genhtml_legend=1 00:19:48.647 --rc geninfo_all_blocks=1 00:19:48.647 --rc geninfo_unexecuted_blocks=1 00:19:48.647 00:19:48.647 ' 00:19:48.647 06:50:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:48.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.647 --rc genhtml_branch_coverage=1 00:19:48.647 --rc genhtml_function_coverage=1 00:19:48.647 --rc genhtml_legend=1 00:19:48.647 --rc geninfo_all_blocks=1 00:19:48.647 --rc geninfo_unexecuted_blocks=1 00:19:48.647 00:19:48.647 ' 00:19:48.647 06:50:02 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:48.647 06:50:02 -- nvmf/common.sh@7 -- # uname -s 00:19:48.647 06:50:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:48.647 06:50:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.647 06:50:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.647 06:50:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:48.647 06:50:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:48.647 06:50:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:48.647 06:50:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.647 06:50:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:48.647 06:50:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.647 06:50:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:48.647 06:50:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 
00:19:48.647 06:50:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:19:48.647 06:50:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.647 06:50:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:48.647 06:50:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:48.647 06:50:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:48.647 06:50:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.647 06:50:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.647 06:50:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.647 06:50:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.647 06:50:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.647 06:50:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.647 06:50:02 -- paths/export.sh@5 -- # export PATH 00:19:48.647 06:50:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.647 06:50:02 -- nvmf/common.sh@46 -- # : 0 00:19:48.647 06:50:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:48.647 06:50:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:48.647 06:50:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:48.647 06:50:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:48.647 06:50:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.647 06:50:02 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:19:48.647 06:50:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:48.647 06:50:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:48.647 06:50:02 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:48.647 06:50:02 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:48.647 06:50:02 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:48.647 06:50:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:48.647 06:50:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.647 06:50:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:48.647 06:50:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:48.647 06:50:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:48.647 06:50:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.647 06:50:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.647 06:50:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.647 06:50:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:48.647 06:50:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:48.647 06:50:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:48.647 06:50:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:48.648 06:50:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:48.648 06:50:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:48.648 06:50:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:48.648 06:50:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:48.648 06:50:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:48.648 06:50:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:48.648 06:50:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:48.648 06:50:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:48.648 06:50:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:48.648 06:50:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:48.648 06:50:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:48.648 06:50:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:48.648 06:50:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:48.648 06:50:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:48.648 06:50:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:48.648 06:50:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:48.906 Cannot find device "nvmf_tgt_br" 00:19:48.906 06:50:02 -- nvmf/common.sh@154 -- # true 00:19:48.906 06:50:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:48.906 Cannot find device "nvmf_tgt_br2" 00:19:48.906 06:50:02 -- nvmf/common.sh@155 -- # true 00:19:48.906 06:50:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:48.906 06:50:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:48.906 Cannot find device "nvmf_tgt_br" 00:19:48.906 06:50:02 -- nvmf/common.sh@157 -- # true 00:19:48.906 06:50:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:48.906 Cannot find device "nvmf_tgt_br2" 00:19:48.906 06:50:02 -- nvmf/common.sh@158 -- # true 00:19:48.906 06:50:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:48.906 06:50:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:48.906 06:50:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:19:48.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:48.906 06:50:02 -- nvmf/common.sh@161 -- # true 00:19:48.906 06:50:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:48.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:48.906 06:50:02 -- nvmf/common.sh@162 -- # true 00:19:48.906 06:50:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:48.906 06:50:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:48.906 06:50:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:48.906 06:50:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:48.906 06:50:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:48.906 06:50:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:48.906 06:50:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:48.906 06:50:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:48.906 06:50:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:48.906 06:50:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:48.906 06:50:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:48.906 06:50:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:48.906 06:50:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:48.906 06:50:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:48.906 06:50:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:48.906 06:50:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:48.906 06:50:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:48.906 06:50:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:48.906 06:50:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:49.165 06:50:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:49.165 06:50:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:49.165 06:50:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:49.165 06:50:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:49.165 06:50:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:49.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:49.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:19:49.165 00:19:49.165 --- 10.0.0.2 ping statistics --- 00:19:49.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.165 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:19:49.165 06:50:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:49.165 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:49.165 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:19:49.165 00:19:49.165 --- 10.0.0.3 ping statistics --- 00:19:49.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.165 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:49.165 06:50:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:49.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:49.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:19:49.165 00:19:49.165 --- 10.0.0.1 ping statistics --- 00:19:49.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.165 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:19:49.165 06:50:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.165 06:50:02 -- nvmf/common.sh@421 -- # return 0 00:19:49.165 06:50:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:49.165 06:50:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.165 06:50:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:49.165 06:50:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:49.165 06:50:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.165 06:50:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:49.165 06:50:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:49.165 06:50:02 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:49.165 06:50:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:49.165 06:50:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:49.165 06:50:02 -- common/autotest_common.sh@10 -- # set +x 00:19:49.165 06:50:02 -- nvmf/common.sh@469 -- # nvmfpid=81248 00:19:49.165 06:50:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:49.165 06:50:02 -- nvmf/common.sh@470 -- # waitforlisten 81248 00:19:49.165 06:50:02 -- common/autotest_common.sh@829 -- # '[' -z 81248 ']' 00:19:49.165 06:50:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.165 06:50:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.165 06:50:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.165 06:50:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.165 06:50:02 -- common/autotest_common.sh@10 -- # set +x 00:19:49.165 [2024-12-14 06:50:03.047820] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:49.165 [2024-12-14 06:50:03.047937] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.423 [2024-12-14 06:50:03.191803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:49.423 [2024-12-14 06:50:03.299341] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:49.424 [2024-12-14 06:50:03.299561] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.424 [2024-12-14 06:50:03.299579] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.424 [2024-12-14 06:50:03.299591] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
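[editor's note] The nvmf_veth_init trace above builds the virtual topology the test runs on: the SPDK target lives in the nvmf_tgt_ns_spdk network namespace on 10.0.0.2 and 10.0.0.3, the initiator stays in the root namespace on 10.0.0.1, and veth pairs plus a bridge tie the two together. A condensed sketch of the same plumbing; interface names and addresses match the trace, the cleanup pass and error handling are omitted:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br    # target side, first listener
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2   # target side, second address

  # move the target ends into the namespace and address everything
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring links up and bridge the root-namespace ends together
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # let NVMe/TCP (port 4420) in and allow hairpin forwarding on the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3    # reachability checks, as in the trace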
00:19:49.424 [2024-12-14 06:50:03.300062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.424 [2024-12-14 06:50:03.300140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.424 [2024-12-14 06:50:03.300316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:49.424 [2024-12-14 06:50:03.300321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.360 06:50:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:50.360 06:50:03 -- common/autotest_common.sh@862 -- # return 0 00:19:50.360 06:50:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:50.360 06:50:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:50.360 06:50:03 -- common/autotest_common.sh@10 -- # set +x 00:19:50.360 06:50:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.360 06:50:04 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:50.360 06:50:04 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:50.360 06:50:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.360 06:50:04 -- common/autotest_common.sh@10 -- # set +x 00:19:50.360 Malloc0 00:19:50.360 06:50:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.360 06:50:04 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:50.360 06:50:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.360 06:50:04 -- common/autotest_common.sh@10 -- # set +x 00:19:50.360 Delay0 00:19:50.360 06:50:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.360 06:50:04 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:50.360 06:50:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.360 06:50:04 -- common/autotest_common.sh@10 -- # set +x 00:19:50.360 [2024-12-14 06:50:04.110856] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.360 06:50:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.360 06:50:04 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:50.360 06:50:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.360 06:50:04 -- common/autotest_common.sh@10 -- # set +x 00:19:50.360 06:50:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.360 06:50:04 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:50.360 06:50:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.360 06:50:04 -- common/autotest_common.sh@10 -- # set +x 00:19:50.360 06:50:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.360 06:50:04 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:50.360 06:50:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.360 06:50:04 -- common/autotest_common.sh@10 -- # set +x 00:19:50.360 [2024-12-14 06:50:04.143086] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.360 06:50:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.360 06:50:04 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:50.360 06:50:04 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:50.360 06:50:04 -- common/autotest_common.sh@1187 -- # local i=0 00:19:50.360 06:50:04 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:50.360 06:50:04 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:50.360 06:50:04 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:52.895 06:50:06 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:52.895 06:50:06 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:52.895 06:50:06 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:52.895 06:50:06 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:52.895 06:50:06 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:52.895 06:50:06 -- common/autotest_common.sh@1197 -- # return 0 00:19:52.895 06:50:06 -- target/initiator_timeout.sh@35 -- # fio_pid=81330 00:19:52.895 06:50:06 -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:52.895 06:50:06 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:52.895 [global] 00:19:52.895 thread=1 00:19:52.895 invalidate=1 00:19:52.895 rw=write 00:19:52.895 time_based=1 00:19:52.895 runtime=60 00:19:52.895 ioengine=libaio 00:19:52.895 direct=1 00:19:52.895 bs=4096 00:19:52.895 iodepth=1 00:19:52.895 norandommap=0 00:19:52.895 numjobs=1 00:19:52.895 00:19:52.895 verify_dump=1 00:19:52.895 verify_backlog=512 00:19:52.895 verify_state_save=0 00:19:52.895 do_verify=1 00:19:52.895 verify=crc32c-intel 00:19:52.895 [job0] 00:19:52.895 filename=/dev/nvme0n1 00:19:52.895 Could not set queue depth (nvme0n1) 00:19:52.895 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:52.895 fio-3.35 00:19:52.895 Starting 1 thread 00:19:55.427 06:50:09 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:55.427 06:50:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.427 06:50:09 -- common/autotest_common.sh@10 -- # set +x 00:19:55.427 true 00:19:55.427 06:50:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.427 06:50:09 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:55.427 06:50:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.427 06:50:09 -- common/autotest_common.sh@10 -- # set +x 00:19:55.427 true 00:19:55.427 06:50:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.427 06:50:09 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:55.427 06:50:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.427 06:50:09 -- common/autotest_common.sh@10 -- # set +x 00:19:55.427 true 00:19:55.427 06:50:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.427 06:50:09 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:55.427 06:50:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.427 06:50:09 -- common/autotest_common.sh@10 -- # set +x 00:19:55.427 true 00:19:55.427 06:50:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.427 06:50:09 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:19:58.712 06:50:12 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:58.712 06:50:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.712 06:50:12 -- common/autotest_common.sh@10 -- # set +x 00:19:58.712 true 00:19:58.712 06:50:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.712 06:50:12 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:58.712 06:50:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.712 06:50:12 -- common/autotest_common.sh@10 -- # set +x 00:19:58.712 true 00:19:58.712 06:50:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.712 06:50:12 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:58.712 06:50:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.712 06:50:12 -- common/autotest_common.sh@10 -- # set +x 00:19:58.712 true 00:19:58.712 06:50:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.712 06:50:12 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:58.712 06:50:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.712 06:50:12 -- common/autotest_common.sh@10 -- # set +x 00:19:58.712 true 00:19:58.712 06:50:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.712 06:50:12 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:58.712 06:50:12 -- target/initiator_timeout.sh@54 -- # wait 81330 00:20:54.956 00:20:54.956 job0: (groupid=0, jobs=1): err= 0: pid=81351: Sat Dec 14 06:51:06 2024 00:20:54.956 read: IOPS=728, BW=2912KiB/s (2982kB/s)(171MiB/60001msec) 00:20:54.956 slat (usec): min=11, max=11600, avg=18.07, stdev=65.40 00:20:54.956 clat (usec): min=33, max=40666k, avg=1150.81, stdev=194555.51 00:20:54.956 lat (usec): min=170, max=40666k, avg=1168.88, stdev=194555.58 00:20:54.956 clat percentiles (usec): 00:20:54.956 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 184], 00:20:54.956 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 210], 60.00th=[ 221], 00:20:54.956 | 70.00th=[ 235], 80.00th=[ 253], 90.00th=[ 277], 95.00th=[ 297], 00:20:54.956 | 99.00th=[ 343], 99.50th=[ 371], 99.90th=[ 562], 99.95th=[ 734], 00:20:54.956 | 99.99th=[ 2671] 00:20:54.956 write: IOPS=733, BW=2935KiB/s (3006kB/s)(172MiB/60001msec); 0 zone resets 00:20:54.956 slat (usec): min=17, max=823, avg=26.35, stdev=11.04 00:20:54.956 clat (usec): min=112, max=3095, avg=172.78, stdev=45.50 00:20:54.956 lat (usec): min=144, max=3123, avg=199.13, stdev=47.11 00:20:54.956 clat percentiles (usec): 00:20:54.956 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 143], 00:20:54.956 | 30.00th=[ 149], 40.00th=[ 155], 50.00th=[ 163], 60.00th=[ 174], 00:20:54.956 | 70.00th=[ 186], 80.00th=[ 200], 90.00th=[ 223], 95.00th=[ 239], 00:20:54.956 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 474], 99.95th=[ 652], 00:20:54.956 | 99.99th=[ 1483] 00:20:54.956 bw ( KiB/s): min= 2448, max=12288, per=100.00%, avg=8781.90, stdev=1846.99, samples=39 00:20:54.956 iops : min= 612, max= 3072, avg=2195.46, stdev=461.75, samples=39 00:20:54.956 lat (usec) : 50=0.01%, 250=88.07%, 500=11.81%, 750=0.08%, 1000=0.02% 00:20:54.956 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:20:54.956 cpu : usr=0.59%, sys=2.33%, ctx=87732, majf=0, minf=5 00:20:54.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:54.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.956 issued rwts: total=43688,44032,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:54.956 00:20:54.956 Run status group 0 (all jobs): 00:20:54.956 READ: bw=2912KiB/s (2982kB/s), 2912KiB/s-2912KiB/s (2982kB/s-2982kB/s), io=171MiB (179MB), run=60001-60001msec 00:20:54.956 WRITE: bw=2935KiB/s (3006kB/s), 2935KiB/s-2935KiB/s (3006kB/s-3006kB/s), io=172MiB (180MB), run=60001-60001msec 00:20:54.956 00:20:54.956 Disk stats (read/write): 00:20:54.956 nvme0n1: ios=43744/43637, merge=0/0, ticks=10115/8187, in_queue=18302, util=99.89% 00:20:54.956 06:51:06 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:54.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:54.956 06:51:06 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:54.956 06:51:06 -- common/autotest_common.sh@1208 -- # local i=0 00:20:54.956 06:51:06 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:54.956 06:51:06 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:54.956 06:51:06 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:54.956 06:51:06 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:54.956 06:51:06 -- common/autotest_common.sh@1220 -- # return 0 00:20:54.956 06:51:06 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:54.956 nvmf hotplug test: fio successful as expected 00:20:54.956 06:51:06 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:54.956 06:51:06 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:54.956 06:51:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.956 06:51:06 -- common/autotest_common.sh@10 -- # set +x 00:20:54.956 06:51:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.956 06:51:06 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:54.956 06:51:06 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:54.956 06:51:06 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:54.956 06:51:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:54.956 06:51:06 -- nvmf/common.sh@116 -- # sync 00:20:54.956 06:51:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:54.956 06:51:06 -- nvmf/common.sh@119 -- # set +e 00:20:54.956 06:51:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:54.956 06:51:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:54.956 rmmod nvme_tcp 00:20:54.956 rmmod nvme_fabrics 00:20:54.956 rmmod nvme_keyring 00:20:54.956 06:51:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:54.956 06:51:06 -- nvmf/common.sh@123 -- # set -e 00:20:54.956 06:51:06 -- nvmf/common.sh@124 -- # return 0 00:20:54.956 06:51:06 -- nvmf/common.sh@477 -- # '[' -n 81248 ']' 00:20:54.956 06:51:06 -- nvmf/common.sh@478 -- # killprocess 81248 00:20:54.956 06:51:06 -- common/autotest_common.sh@936 -- # '[' -z 81248 ']' 00:20:54.956 06:51:06 -- common/autotest_common.sh@940 -- # kill -0 81248 00:20:54.956 06:51:06 -- common/autotest_common.sh@941 -- # uname 00:20:54.956 06:51:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:54.956 06:51:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81248 00:20:54.956 killing 
process with pid 81248 00:20:54.956 06:51:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:54.956 06:51:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:54.956 06:51:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81248' 00:20:54.956 06:51:06 -- common/autotest_common.sh@955 -- # kill 81248 00:20:54.956 06:51:06 -- common/autotest_common.sh@960 -- # wait 81248 00:20:54.956 06:51:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:54.956 06:51:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:54.956 06:51:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:54.956 06:51:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:54.956 06:51:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:54.956 06:51:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.956 06:51:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.957 06:51:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.957 06:51:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:54.957 00:20:54.957 real 1m4.858s 00:20:54.957 user 4m5.939s 00:20:54.957 sys 0m9.065s 00:20:54.957 06:51:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:54.957 06:51:07 -- common/autotest_common.sh@10 -- # set +x 00:20:54.957 ************************************ 00:20:54.957 END TEST nvmf_initiator_timeout 00:20:54.957 ************************************ 00:20:54.957 06:51:07 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:20:54.957 06:51:07 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:54.957 06:51:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:54.957 06:51:07 -- common/autotest_common.sh@10 -- # set +x 00:20:54.957 06:51:07 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:54.957 06:51:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:54.957 06:51:07 -- common/autotest_common.sh@10 -- # set +x 00:20:54.957 06:51:07 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:54.957 06:51:07 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:54.957 06:51:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:54.957 06:51:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:54.957 06:51:07 -- common/autotest_common.sh@10 -- # set +x 00:20:54.957 ************************************ 00:20:54.957 START TEST nvmf_multicontroller 00:20:54.957 ************************************ 00:20:54.957 06:51:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:54.957 * Looking for test storage... 
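[editor's note] For reference, the initiator_timeout run that just finished boils down to a short RPC/fio sequence: export a Malloc bdev behind a delay bdev, connect the kernel initiator, start a 60-second verified write job, push the delay-bdev latencies to roughly 31 s (past the initiator's nominal 30 s I/O timeout) mid-run, then drop them back and require fio to exit cleanly. A condensed sketch of that flow; rpc.py stands in for the test's rpc_cmd wrapper, and all paths, NQNs and values are taken from the trace above:

  spdk=/home/vagrant/spdk_repo/spdk
  rpc=$spdk/scripts/rpc.py

  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # latencies in microseconds
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # $NVME_HOSTNQN / $NVME_HOSTID come from "nvme gen-hostnqn", as in the trace
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

  $spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v &   # 60 s verify-write job
  fio_pid=$!
  sleep 3
  $rpc bdev_delay_update_latency Delay0 avg_read  31000000
  $rpc bdev_delay_update_latency Delay0 avg_write 31000000
  $rpc bdev_delay_update_latency Delay0 p99_read  31000000
  $rpc bdev_delay_update_latency Delay0 p99_write 310000000   # values as seen in the trace
  sleep 3
  for lat in avg_read avg_write p99_read p99_write; do
      $rpc bdev_delay_update_latency Delay0 "$lat" 30          # back to 30 us
  done
  wait "$fio_pid"   # the test requires fio to still exit 0 despite the latency spike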
00:20:54.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:54.957 06:51:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:54.957 06:51:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:54.957 06:51:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:54.957 06:51:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:54.957 06:51:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:54.957 06:51:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:54.957 06:51:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:54.957 06:51:07 -- scripts/common.sh@335 -- # IFS=.-: 00:20:54.957 06:51:07 -- scripts/common.sh@335 -- # read -ra ver1 00:20:54.957 06:51:07 -- scripts/common.sh@336 -- # IFS=.-: 00:20:54.957 06:51:07 -- scripts/common.sh@336 -- # read -ra ver2 00:20:54.957 06:51:07 -- scripts/common.sh@337 -- # local 'op=<' 00:20:54.957 06:51:07 -- scripts/common.sh@339 -- # ver1_l=2 00:20:54.957 06:51:07 -- scripts/common.sh@340 -- # ver2_l=1 00:20:54.957 06:51:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:54.957 06:51:07 -- scripts/common.sh@343 -- # case "$op" in 00:20:54.957 06:51:07 -- scripts/common.sh@344 -- # : 1 00:20:54.957 06:51:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:54.957 06:51:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:54.957 06:51:07 -- scripts/common.sh@364 -- # decimal 1 00:20:54.957 06:51:07 -- scripts/common.sh@352 -- # local d=1 00:20:54.957 06:51:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:54.957 06:51:07 -- scripts/common.sh@354 -- # echo 1 00:20:54.957 06:51:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:54.957 06:51:07 -- scripts/common.sh@365 -- # decimal 2 00:20:54.957 06:51:07 -- scripts/common.sh@352 -- # local d=2 00:20:54.957 06:51:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:54.957 06:51:07 -- scripts/common.sh@354 -- # echo 2 00:20:54.957 06:51:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:54.957 06:51:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:54.957 06:51:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:54.957 06:51:07 -- scripts/common.sh@367 -- # return 0 00:20:54.957 06:51:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:54.957 06:51:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:54.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.957 --rc genhtml_branch_coverage=1 00:20:54.957 --rc genhtml_function_coverage=1 00:20:54.957 --rc genhtml_legend=1 00:20:54.957 --rc geninfo_all_blocks=1 00:20:54.957 --rc geninfo_unexecuted_blocks=1 00:20:54.957 00:20:54.957 ' 00:20:54.957 06:51:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:54.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.957 --rc genhtml_branch_coverage=1 00:20:54.957 --rc genhtml_function_coverage=1 00:20:54.957 --rc genhtml_legend=1 00:20:54.957 --rc geninfo_all_blocks=1 00:20:54.957 --rc geninfo_unexecuted_blocks=1 00:20:54.957 00:20:54.957 ' 00:20:54.957 06:51:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:54.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.957 --rc genhtml_branch_coverage=1 00:20:54.957 --rc genhtml_function_coverage=1 00:20:54.957 --rc genhtml_legend=1 00:20:54.957 --rc geninfo_all_blocks=1 00:20:54.957 --rc geninfo_unexecuted_blocks=1 00:20:54.957 00:20:54.957 ' 00:20:54.957 
06:51:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:54.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.957 --rc genhtml_branch_coverage=1 00:20:54.957 --rc genhtml_function_coverage=1 00:20:54.957 --rc genhtml_legend=1 00:20:54.957 --rc geninfo_all_blocks=1 00:20:54.957 --rc geninfo_unexecuted_blocks=1 00:20:54.957 00:20:54.957 ' 00:20:54.957 06:51:07 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:54.957 06:51:07 -- nvmf/common.sh@7 -- # uname -s 00:20:54.957 06:51:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.957 06:51:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.957 06:51:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.957 06:51:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.957 06:51:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.957 06:51:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.957 06:51:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.957 06:51:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.957 06:51:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.957 06:51:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.957 06:51:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:20:54.957 06:51:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:20:54.957 06:51:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.957 06:51:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.957 06:51:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:54.957 06:51:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:54.957 06:51:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.957 06:51:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.957 06:51:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.957 06:51:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.957 06:51:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.957 06:51:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.957 06:51:07 -- paths/export.sh@5 -- # export PATH 00:20:54.957 06:51:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.957 06:51:07 -- nvmf/common.sh@46 -- # : 0 00:20:54.957 06:51:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:54.957 06:51:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:54.957 06:51:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:54.957 06:51:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.957 06:51:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.957 06:51:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:54.957 06:51:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:54.957 06:51:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:54.957 06:51:07 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:54.957 06:51:07 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:54.957 06:51:07 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:54.957 06:51:07 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:54.957 06:51:07 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:54.957 06:51:07 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:54.957 06:51:07 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:54.957 06:51:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:54.957 06:51:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.957 06:51:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:54.957 06:51:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:54.957 06:51:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:54.957 06:51:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.957 06:51:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.957 06:51:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.957 06:51:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:54.957 06:51:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:54.958 06:51:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:54.958 06:51:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:54.958 06:51:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:54.958 06:51:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:54.958 06:51:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:54.958 06:51:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:20:54.958 06:51:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:54.958 06:51:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:54.958 06:51:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:54.958 06:51:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:54.958 06:51:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:54.958 06:51:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:54.958 06:51:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:54.958 06:51:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:54.958 06:51:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:54.958 06:51:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:54.958 06:51:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:54.958 06:51:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:54.958 Cannot find device "nvmf_tgt_br" 00:20:54.958 06:51:07 -- nvmf/common.sh@154 -- # true 00:20:54.958 06:51:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:54.958 Cannot find device "nvmf_tgt_br2" 00:20:54.958 06:51:07 -- nvmf/common.sh@155 -- # true 00:20:54.958 06:51:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:54.958 06:51:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:54.958 Cannot find device "nvmf_tgt_br" 00:20:54.958 06:51:07 -- nvmf/common.sh@157 -- # true 00:20:54.958 06:51:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:54.958 Cannot find device "nvmf_tgt_br2" 00:20:54.958 06:51:07 -- nvmf/common.sh@158 -- # true 00:20:54.958 06:51:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:54.958 06:51:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:54.958 06:51:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:54.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:54.958 06:51:07 -- nvmf/common.sh@161 -- # true 00:20:54.958 06:51:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:54.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:54.958 06:51:07 -- nvmf/common.sh@162 -- # true 00:20:54.958 06:51:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:54.958 06:51:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:54.958 06:51:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:54.958 06:51:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:54.958 06:51:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:54.958 06:51:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:54.958 06:51:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:54.958 06:51:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:54.958 06:51:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:54.958 06:51:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:54.958 06:51:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:54.958 06:51:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:20:54.958 06:51:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:54.958 06:51:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:54.958 06:51:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:54.958 06:51:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:54.958 06:51:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:54.958 06:51:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:54.958 06:51:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:54.958 06:51:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:54.958 06:51:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:54.958 06:51:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:54.958 06:51:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:54.958 06:51:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:54.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:54.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:20:54.958 00:20:54.958 --- 10.0.0.2 ping statistics --- 00:20:54.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.958 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:20:54.958 06:51:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:54.958 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:54.958 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:20:54.958 00:20:54.958 --- 10.0.0.3 ping statistics --- 00:20:54.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.958 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:20:54.958 06:51:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:54.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:54.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:20:54.958 00:20:54.958 --- 10.0.0.1 ping statistics --- 00:20:54.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.958 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:20:54.958 06:51:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:54.958 06:51:07 -- nvmf/common.sh@421 -- # return 0 00:20:54.958 06:51:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:54.958 06:51:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:54.958 06:51:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:54.958 06:51:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:54.958 06:51:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:54.958 06:51:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:54.958 06:51:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:54.958 06:51:07 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:54.958 06:51:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:54.958 06:51:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:54.958 06:51:07 -- common/autotest_common.sh@10 -- # set +x 00:20:54.958 06:51:07 -- nvmf/common.sh@469 -- # nvmfpid=82185 00:20:54.958 06:51:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:54.958 06:51:07 -- nvmf/common.sh@470 -- # waitforlisten 82185 00:20:54.958 06:51:07 -- common/autotest_common.sh@829 -- # '[' -z 82185 ']' 00:20:54.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.958 06:51:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.958 06:51:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:54.958 06:51:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.958 06:51:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:54.958 06:51:07 -- common/autotest_common.sh@10 -- # set +x 00:20:54.958 [2024-12-14 06:51:07.933469] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:54.958 [2024-12-14 06:51:07.933552] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.958 [2024-12-14 06:51:08.069889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:54.958 [2024-12-14 06:51:08.173713] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:54.958 [2024-12-14 06:51:08.173906] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.958 [2024-12-14 06:51:08.173924] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.958 [2024-12-14 06:51:08.173936] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
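[editor's note] One small detail worth calling out across the two app launches: the -m argument is a bit mask of reactor cores, which is why the initiator_timeout target (-m 0xF) started reactors on cores 0-3 while this multicontroller target (-m 0xE) starts them only on cores 1-3. A quick way to decode such a mask, for illustration only:

  mask=0xE
  for ((core = 0; core < 64; core++)); do
      (( (mask >> core) & 1 )) && echo "reactor on core $core"
  done
  # 0xE = 0b1110 -> cores 1, 2, 3;  0xF = 0b1111 -> cores 0-3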
00:20:54.958 [2024-12-14 06:51:08.174166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.958 [2024-12-14 06:51:08.174738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:54.958 [2024-12-14 06:51:08.174791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.217 06:51:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:55.217 06:51:08 -- common/autotest_common.sh@862 -- # return 0 00:20:55.217 06:51:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:55.217 06:51:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:55.217 06:51:08 -- common/autotest_common.sh@10 -- # set +x 00:20:55.217 06:51:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.217 06:51:09 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:55.217 06:51:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.217 06:51:09 -- common/autotest_common.sh@10 -- # set +x 00:20:55.217 [2024-12-14 06:51:09.031188] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.217 06:51:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.217 06:51:09 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:55.217 06:51:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.217 06:51:09 -- common/autotest_common.sh@10 -- # set +x 00:20:55.217 Malloc0 00:20:55.217 06:51:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.217 06:51:09 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:55.217 06:51:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.217 06:51:09 -- common/autotest_common.sh@10 -- # set +x 00:20:55.217 06:51:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.217 06:51:09 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:55.217 06:51:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.217 06:51:09 -- common/autotest_common.sh@10 -- # set +x 00:20:55.217 06:51:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.217 06:51:09 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:55.217 06:51:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.217 06:51:09 -- common/autotest_common.sh@10 -- # set +x 00:20:55.217 [2024-12-14 06:51:09.101177] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.217 06:51:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.217 06:51:09 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:55.217 06:51:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.217 06:51:09 -- common/autotest_common.sh@10 -- # set +x 00:20:55.218 [2024-12-14 06:51:09.109082] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:55.218 06:51:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.218 06:51:09 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:55.218 06:51:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.218 06:51:09 -- common/autotest_common.sh@10 -- # set +x 00:20:55.218 Malloc1 00:20:55.218 06:51:09 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.218 06:51:09 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:55.218 06:51:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.218 06:51:09 -- common/autotest_common.sh@10 -- # set +x 00:20:55.218 06:51:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.218 06:51:09 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:55.218 06:51:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.218 06:51:09 -- common/autotest_common.sh@10 -- # set +x 00:20:55.218 06:51:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.218 06:51:09 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:55.218 06:51:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.218 06:51:09 -- common/autotest_common.sh@10 -- # set +x 00:20:55.218 06:51:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.218 06:51:09 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:55.218 06:51:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.218 06:51:09 -- common/autotest_common.sh@10 -- # set +x 00:20:55.218 06:51:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.218 06:51:09 -- host/multicontroller.sh@44 -- # bdevperf_pid=82237 00:20:55.218 06:51:09 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:55.218 06:51:09 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:55.218 06:51:09 -- host/multicontroller.sh@47 -- # waitforlisten 82237 /var/tmp/bdevperf.sock 00:20:55.218 06:51:09 -- common/autotest_common.sh@829 -- # '[' -z 82237 ']' 00:20:55.218 06:51:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:55.218 06:51:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:55.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:55.218 06:51:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
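[editor's note] The multicontroller test drives I/O from bdevperf rather than from the kernel initiator: bdevperf is started with -z so it stays idle and listens on its own RPC socket, the controllers are attached through that socket, and traffic is kicked off by the bdevperf.py helper. A condensed sketch of that control flow; rpc.py stands in for the rpc_cmd wrapper used in the trace, while the binary paths, socket and flags match the trace:

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bdevperf.sock

  # start bdevperf idle (-z) and let it wait for configuration over its private RPC socket
  $spdk/build/examples/bdevperf -z -r $sock -q 128 -o 4096 -w write -t 1 -f &
  bdevperf_pid=$!

  # attach the exported subsystem as bdev "NVMe0" over the first listener,
  # pinning the host-side network identity with -i/-c as the test does
  $spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -i 10.0.0.2 -c 60000

  # run the configured workload against the attached bdev(s)
  $spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests

  kill $bdevperf_pid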
00:20:55.218 06:51:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:55.218 06:51:09 -- common/autotest_common.sh@10 -- # set +x 00:20:56.596 06:51:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:56.596 06:51:10 -- common/autotest_common.sh@862 -- # return 0 00:20:56.596 06:51:10 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:56.596 06:51:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.596 06:51:10 -- common/autotest_common.sh@10 -- # set +x 00:20:56.596 NVMe0n1 00:20:56.596 06:51:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.596 06:51:10 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:56.596 06:51:10 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:56.596 06:51:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.596 06:51:10 -- common/autotest_common.sh@10 -- # set +x 00:20:56.596 06:51:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.596 1 00:20:56.596 06:51:10 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:56.596 06:51:10 -- common/autotest_common.sh@650 -- # local es=0 00:20:56.596 06:51:10 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:56.596 06:51:10 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:56.596 06:51:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:56.596 06:51:10 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:56.596 06:51:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:56.596 06:51:10 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:56.596 06:51:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.596 06:51:10 -- common/autotest_common.sh@10 -- # set +x 00:20:56.596 2024/12/14 06:51:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:56.596 request: 00:20:56.596 { 00:20:56.596 "method": "bdev_nvme_attach_controller", 00:20:56.596 "params": { 00:20:56.596 "name": "NVMe0", 00:20:56.596 "trtype": "tcp", 00:20:56.596 "traddr": "10.0.0.2", 00:20:56.596 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:56.596 "hostaddr": "10.0.0.2", 00:20:56.596 "hostsvcid": "60000", 00:20:56.596 "adrfam": "ipv4", 00:20:56.596 "trsvcid": "4420", 00:20:56.596 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:20:56.596 } 00:20:56.596 } 00:20:56.596 Got JSON-RPC error response 00:20:56.596 GoRPCClient: error on JSON-RPC call 00:20:56.596 06:51:10 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:56.596 06:51:10 -- 
common/autotest_common.sh@653 -- # es=1 00:20:56.596 06:51:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:56.596 06:51:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:56.596 06:51:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:56.596 06:51:10 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:56.596 06:51:10 -- common/autotest_common.sh@650 -- # local es=0 00:20:56.596 06:51:10 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:56.596 06:51:10 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:56.596 06:51:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:56.596 06:51:10 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:56.596 06:51:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:56.596 06:51:10 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:56.596 06:51:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.596 06:51:10 -- common/autotest_common.sh@10 -- # set +x 00:20:56.596 2024/12/14 06:51:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:56.596 request: 00:20:56.596 { 00:20:56.596 "method": "bdev_nvme_attach_controller", 00:20:56.596 "params": { 00:20:56.596 "name": "NVMe0", 00:20:56.596 "trtype": "tcp", 00:20:56.596 "traddr": "10.0.0.2", 00:20:56.596 "hostaddr": "10.0.0.2", 00:20:56.596 "hostsvcid": "60000", 00:20:56.596 "adrfam": "ipv4", 00:20:56.596 "trsvcid": "4420", 00:20:56.596 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:20:56.596 } 00:20:56.596 } 00:20:56.596 Got JSON-RPC error response 00:20:56.596 GoRPCClient: error on JSON-RPC call 00:20:56.596 06:51:10 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:56.596 06:51:10 -- common/autotest_common.sh@653 -- # es=1 00:20:56.596 06:51:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:56.596 06:51:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:56.596 06:51:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:56.596 06:51:10 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:56.596 06:51:10 -- common/autotest_common.sh@650 -- # local es=0 00:20:56.596 06:51:10 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:56.596 06:51:10 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:56.596 06:51:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:56.596 06:51:10 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:56.596 06:51:10 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:56.596 06:51:10 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:56.596 06:51:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.596 06:51:10 -- common/autotest_common.sh@10 -- # set +x 00:20:56.596 2024/12/14 06:51:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:20:56.596 request: 00:20:56.596 { 00:20:56.596 "method": "bdev_nvme_attach_controller", 00:20:56.596 "params": { 00:20:56.596 "name": "NVMe0", 00:20:56.596 "trtype": "tcp", 00:20:56.596 "traddr": "10.0.0.2", 00:20:56.596 "hostaddr": "10.0.0.2", 00:20:56.596 "hostsvcid": "60000", 00:20:56.596 "adrfam": "ipv4", 00:20:56.596 "trsvcid": "4420", 00:20:56.596 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.596 "multipath": "disable" 00:20:56.596 } 00:20:56.596 } 00:20:56.596 Got JSON-RPC error response 00:20:56.596 GoRPCClient: error on JSON-RPC call 00:20:56.596 06:51:10 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:56.596 06:51:10 -- common/autotest_common.sh@653 -- # es=1 00:20:56.596 06:51:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:56.597 06:51:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:56.597 06:51:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:56.597 06:51:10 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:56.597 06:51:10 -- common/autotest_common.sh@650 -- # local es=0 00:20:56.597 06:51:10 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:56.597 06:51:10 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:56.597 06:51:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:56.597 06:51:10 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:56.597 06:51:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:56.597 06:51:10 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:56.597 06:51:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.597 06:51:10 -- common/autotest_common.sh@10 -- # set +x 00:20:56.597 2024/12/14 06:51:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:56.597 request: 00:20:56.597 { 00:20:56.597 "method": "bdev_nvme_attach_controller", 00:20:56.597 "params": { 00:20:56.597 "name": "NVMe0", 
00:20:56.597 "trtype": "tcp", 00:20:56.597 "traddr": "10.0.0.2", 00:20:56.597 "hostaddr": "10.0.0.2", 00:20:56.597 "hostsvcid": "60000", 00:20:56.597 "adrfam": "ipv4", 00:20:56.597 "trsvcid": "4420", 00:20:56.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.597 "multipath": "failover" 00:20:56.597 } 00:20:56.597 } 00:20:56.597 Got JSON-RPC error response 00:20:56.597 GoRPCClient: error on JSON-RPC call 00:20:56.597 06:51:10 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:56.597 06:51:10 -- common/autotest_common.sh@653 -- # es=1 00:20:56.597 06:51:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:56.597 06:51:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:56.597 06:51:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:56.597 06:51:10 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:56.597 06:51:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.597 06:51:10 -- common/autotest_common.sh@10 -- # set +x 00:20:56.597 00:20:56.597 06:51:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.597 06:51:10 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:56.597 06:51:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.597 06:51:10 -- common/autotest_common.sh@10 -- # set +x 00:20:56.597 06:51:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.597 06:51:10 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:56.597 06:51:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.597 06:51:10 -- common/autotest_common.sh@10 -- # set +x 00:20:56.597 00:20:56.597 06:51:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.597 06:51:10 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:56.597 06:51:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.597 06:51:10 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:56.597 06:51:10 -- common/autotest_common.sh@10 -- # set +x 00:20:56.597 06:51:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.597 06:51:10 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:56.597 06:51:10 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:57.975 0 00:20:57.975 06:51:11 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:57.975 06:51:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.975 06:51:11 -- common/autotest_common.sh@10 -- # set +x 00:20:57.975 06:51:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.975 06:51:11 -- host/multicontroller.sh@100 -- # killprocess 82237 00:20:57.975 06:51:11 -- common/autotest_common.sh@936 -- # '[' -z 82237 ']' 00:20:57.975 06:51:11 -- common/autotest_common.sh@940 -- # kill -0 82237 00:20:57.975 06:51:11 -- common/autotest_common.sh@941 -- # uname 00:20:57.975 06:51:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:57.975 06:51:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82237 00:20:57.975 killing process with pid 82237 00:20:57.975 
06:51:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:57.975 06:51:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:57.975 06:51:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82237' 00:20:57.975 06:51:11 -- common/autotest_common.sh@955 -- # kill 82237 00:20:57.975 06:51:11 -- common/autotest_common.sh@960 -- # wait 82237 00:20:58.234 06:51:12 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:58.234 06:51:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.234 06:51:12 -- common/autotest_common.sh@10 -- # set +x 00:20:58.234 06:51:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.234 06:51:12 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:58.234 06:51:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.234 06:51:12 -- common/autotest_common.sh@10 -- # set +x 00:20:58.234 06:51:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.234 06:51:12 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:58.234 06:51:12 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:58.234 06:51:12 -- common/autotest_common.sh@1607 -- # read -r file 00:20:58.234 06:51:12 -- common/autotest_common.sh@1606 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:20:58.234 06:51:12 -- common/autotest_common.sh@1606 -- # sort -u 00:20:58.234 06:51:12 -- common/autotest_common.sh@1608 -- # cat 00:20:58.234 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:58.234 [2024-12-14 06:51:09.237557] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:58.234 [2024-12-14 06:51:09.237678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82237 ] 00:20:58.234 [2024-12-14 06:51:09.375625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.234 [2024-12-14 06:51:09.498369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.234 [2024-12-14 06:51:10.493130] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 39ff308b-2bbd-48ab-a5eb-3a7097579015 already exists 00:20:58.234 [2024-12-14 06:51:10.493190] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:39ff308b-2bbd-48ab-a5eb-3a7097579015 alias for bdev NVMe1n1 00:20:58.235 [2024-12-14 06:51:10.493222] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:58.235 Running I/O for 1 seconds... 
00:20:58.235 00:20:58.235 Latency(us) 00:20:58.235 [2024-12-14T06:51:12.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.235 [2024-12-14T06:51:12.227Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:58.235 NVMe0n1 : 1.01 18462.47 72.12 0.00 0.00 6915.26 3559.80 15966.95 00:20:58.235 [2024-12-14T06:51:12.227Z] =================================================================================================================== 00:20:58.235 [2024-12-14T06:51:12.227Z] Total : 18462.47 72.12 0.00 0.00 6915.26 3559.80 15966.95 00:20:58.235 Received shutdown signal, test time was about 1.000000 seconds 00:20:58.235 00:20:58.235 Latency(us) 00:20:58.235 [2024-12-14T06:51:12.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.235 [2024-12-14T06:51:12.227Z] =================================================================================================================== 00:20:58.235 [2024-12-14T06:51:12.227Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:58.235 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:58.235 06:51:12 -- common/autotest_common.sh@1613 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:58.235 06:51:12 -- common/autotest_common.sh@1607 -- # read -r file 00:20:58.235 06:51:12 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:58.235 06:51:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:58.235 06:51:12 -- nvmf/common.sh@116 -- # sync 00:20:58.235 06:51:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:58.235 06:51:12 -- nvmf/common.sh@119 -- # set +e 00:20:58.235 06:51:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:58.235 06:51:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:58.235 rmmod nvme_tcp 00:20:58.235 rmmod nvme_fabrics 00:20:58.235 rmmod nvme_keyring 00:20:58.235 06:51:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:58.235 06:51:12 -- nvmf/common.sh@123 -- # set -e 00:20:58.235 06:51:12 -- nvmf/common.sh@124 -- # return 0 00:20:58.235 06:51:12 -- nvmf/common.sh@477 -- # '[' -n 82185 ']' 00:20:58.235 06:51:12 -- nvmf/common.sh@478 -- # killprocess 82185 00:20:58.235 06:51:12 -- common/autotest_common.sh@936 -- # '[' -z 82185 ']' 00:20:58.235 06:51:12 -- common/autotest_common.sh@940 -- # kill -0 82185 00:20:58.235 06:51:12 -- common/autotest_common.sh@941 -- # uname 00:20:58.235 06:51:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:58.235 06:51:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82185 00:20:58.493 killing process with pid 82185 00:20:58.494 06:51:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:58.494 06:51:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:58.494 06:51:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82185' 00:20:58.494 06:51:12 -- common/autotest_common.sh@955 -- # kill 82185 00:20:58.494 06:51:12 -- common/autotest_common.sh@960 -- # wait 82185 00:20:58.752 06:51:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:58.752 06:51:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:58.752 06:51:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:58.752 06:51:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:58.752 06:51:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:58.752 06:51:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.752 06:51:12 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:58.752 06:51:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.752 06:51:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:58.752 00:20:58.752 real 0m5.302s 00:20:58.752 user 0m16.434s 00:20:58.752 sys 0m1.254s 00:20:58.752 06:51:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:58.752 06:51:12 -- common/autotest_common.sh@10 -- # set +x 00:20:58.752 ************************************ 00:20:58.752 END TEST nvmf_multicontroller 00:20:58.752 ************************************ 00:20:58.752 06:51:12 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:58.752 06:51:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:58.752 06:51:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:58.752 06:51:12 -- common/autotest_common.sh@10 -- # set +x 00:20:58.752 ************************************ 00:20:58.752 START TEST nvmf_aer 00:20:58.752 ************************************ 00:20:58.752 06:51:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:59.012 * Looking for test storage... 00:20:59.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:59.012 06:51:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:59.012 06:51:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:59.012 06:51:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:59.012 06:51:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:59.012 06:51:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:59.012 06:51:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:59.012 06:51:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:59.012 06:51:12 -- scripts/common.sh@335 -- # IFS=.-: 00:20:59.012 06:51:12 -- scripts/common.sh@335 -- # read -ra ver1 00:20:59.012 06:51:12 -- scripts/common.sh@336 -- # IFS=.-: 00:20:59.012 06:51:12 -- scripts/common.sh@336 -- # read -ra ver2 00:20:59.012 06:51:12 -- scripts/common.sh@337 -- # local 'op=<' 00:20:59.012 06:51:12 -- scripts/common.sh@339 -- # ver1_l=2 00:20:59.012 06:51:12 -- scripts/common.sh@340 -- # ver2_l=1 00:20:59.012 06:51:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:59.012 06:51:12 -- scripts/common.sh@343 -- # case "$op" in 00:20:59.012 06:51:12 -- scripts/common.sh@344 -- # : 1 00:20:59.012 06:51:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:59.012 06:51:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:59.012 06:51:12 -- scripts/common.sh@364 -- # decimal 1 00:20:59.012 06:51:12 -- scripts/common.sh@352 -- # local d=1 00:20:59.012 06:51:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:59.012 06:51:12 -- scripts/common.sh@354 -- # echo 1 00:20:59.012 06:51:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:59.012 06:51:12 -- scripts/common.sh@365 -- # decimal 2 00:20:59.012 06:51:12 -- scripts/common.sh@352 -- # local d=2 00:20:59.012 06:51:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:59.012 06:51:12 -- scripts/common.sh@354 -- # echo 2 00:20:59.012 06:51:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:59.012 06:51:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:59.012 06:51:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:59.012 06:51:12 -- scripts/common.sh@367 -- # return 0 00:20:59.012 06:51:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:59.012 06:51:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:59.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.012 --rc genhtml_branch_coverage=1 00:20:59.012 --rc genhtml_function_coverage=1 00:20:59.012 --rc genhtml_legend=1 00:20:59.012 --rc geninfo_all_blocks=1 00:20:59.012 --rc geninfo_unexecuted_blocks=1 00:20:59.012 00:20:59.012 ' 00:20:59.012 06:51:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:59.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.012 --rc genhtml_branch_coverage=1 00:20:59.012 --rc genhtml_function_coverage=1 00:20:59.012 --rc genhtml_legend=1 00:20:59.012 --rc geninfo_all_blocks=1 00:20:59.012 --rc geninfo_unexecuted_blocks=1 00:20:59.012 00:20:59.012 ' 00:20:59.012 06:51:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:59.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.012 --rc genhtml_branch_coverage=1 00:20:59.012 --rc genhtml_function_coverage=1 00:20:59.012 --rc genhtml_legend=1 00:20:59.012 --rc geninfo_all_blocks=1 00:20:59.012 --rc geninfo_unexecuted_blocks=1 00:20:59.012 00:20:59.012 ' 00:20:59.012 06:51:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:59.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.012 --rc genhtml_branch_coverage=1 00:20:59.012 --rc genhtml_function_coverage=1 00:20:59.012 --rc genhtml_legend=1 00:20:59.012 --rc geninfo_all_blocks=1 00:20:59.012 --rc geninfo_unexecuted_blocks=1 00:20:59.012 00:20:59.012 ' 00:20:59.012 06:51:12 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:59.012 06:51:12 -- nvmf/common.sh@7 -- # uname -s 00:20:59.012 06:51:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:59.012 06:51:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:59.012 06:51:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:59.012 06:51:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:59.012 06:51:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:59.012 06:51:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:59.012 06:51:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:59.012 06:51:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:59.012 06:51:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:59.012 06:51:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:59.012 06:51:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:20:59.012 
06:51:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:20:59.012 06:51:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:59.012 06:51:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:59.012 06:51:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:59.012 06:51:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:59.012 06:51:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:59.012 06:51:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:59.012 06:51:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:59.012 06:51:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.012 06:51:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.012 06:51:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.012 06:51:12 -- paths/export.sh@5 -- # export PATH 00:20:59.012 06:51:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.012 06:51:12 -- nvmf/common.sh@46 -- # : 0 00:20:59.012 06:51:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:59.012 06:51:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:59.012 06:51:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:59.012 06:51:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:59.012 06:51:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:59.012 06:51:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:59.013 06:51:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:59.013 06:51:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:59.013 06:51:12 -- host/aer.sh@11 -- # nvmftestinit 00:20:59.013 06:51:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:59.013 06:51:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:59.013 06:51:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:59.013 06:51:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:59.013 06:51:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:59.013 06:51:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.013 06:51:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:59.013 06:51:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.013 06:51:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:59.013 06:51:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:59.013 06:51:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:59.013 06:51:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:59.013 06:51:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:59.013 06:51:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:59.013 06:51:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:59.013 06:51:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:59.013 06:51:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:59.013 06:51:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:59.013 06:51:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:59.013 06:51:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:59.013 06:51:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:59.013 06:51:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:59.013 06:51:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:59.013 06:51:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:59.013 06:51:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:59.013 06:51:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:59.013 06:51:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:59.013 06:51:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:59.013 Cannot find device "nvmf_tgt_br" 00:20:59.013 06:51:12 -- nvmf/common.sh@154 -- # true 00:20:59.013 06:51:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:59.013 Cannot find device "nvmf_tgt_br2" 00:20:59.013 06:51:12 -- nvmf/common.sh@155 -- # true 00:20:59.013 06:51:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:59.013 06:51:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:59.013 Cannot find device "nvmf_tgt_br" 00:20:59.013 06:51:12 -- nvmf/common.sh@157 -- # true 00:20:59.013 06:51:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:59.013 Cannot find device "nvmf_tgt_br2" 00:20:59.013 06:51:12 -- nvmf/common.sh@158 -- # true 00:20:59.013 06:51:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:59.271 06:51:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:59.271 06:51:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:59.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:59.271 06:51:13 -- nvmf/common.sh@161 -- # true 00:20:59.271 06:51:13 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:59.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:59.271 06:51:13 -- nvmf/common.sh@162 -- # true 00:20:59.271 06:51:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:59.271 06:51:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:59.271 06:51:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:59.271 06:51:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:59.271 06:51:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:59.271 06:51:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:59.271 06:51:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:59.271 06:51:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:59.271 06:51:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:59.271 06:51:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:59.271 06:51:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:59.271 06:51:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:59.271 06:51:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:59.271 06:51:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:59.271 06:51:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:59.271 06:51:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:59.271 06:51:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:59.271 06:51:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:59.271 06:51:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:59.271 06:51:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:59.271 06:51:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:59.271 06:51:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:59.271 06:51:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:59.271 06:51:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:59.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:59.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:20:59.271 00:20:59.271 --- 10.0.0.2 ping statistics --- 00:20:59.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.271 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:20:59.271 06:51:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:59.271 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:59.271 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:20:59.271 00:20:59.271 --- 10.0.0.3 ping statistics --- 00:20:59.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.271 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:59.271 06:51:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:59.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:59.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:20:59.271 00:20:59.271 --- 10.0.0.1 ping statistics --- 00:20:59.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.272 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:20:59.272 06:51:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:59.272 06:51:13 -- nvmf/common.sh@421 -- # return 0 00:20:59.272 06:51:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:59.272 06:51:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:59.272 06:51:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:59.272 06:51:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:59.272 06:51:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:59.272 06:51:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:59.272 06:51:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:59.272 06:51:13 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:59.272 06:51:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:59.272 06:51:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:59.272 06:51:13 -- common/autotest_common.sh@10 -- # set +x 00:20:59.272 06:51:13 -- nvmf/common.sh@469 -- # nvmfpid=82494 00:20:59.272 06:51:13 -- nvmf/common.sh@470 -- # waitforlisten 82494 00:20:59.272 06:51:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:59.272 06:51:13 -- common/autotest_common.sh@829 -- # '[' -z 82494 ']' 00:20:59.272 06:51:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.272 06:51:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:59.272 06:51:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.272 06:51:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:59.272 06:51:13 -- common/autotest_common.sh@10 -- # set +x 00:20:59.530 [2024-12-14 06:51:13.277290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:59.530 [2024-12-14 06:51:13.277372] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.530 [2024-12-14 06:51:13.415155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:59.530 [2024-12-14 06:51:13.508380] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:59.530 [2024-12-14 06:51:13.508815] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.530 [2024-12-14 06:51:13.508926] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.530 [2024-12-14 06:51:13.509168] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
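Everything the aer test does over TCP relies on the virtual topology that nvmf_veth_init just rebuilt: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, the target's two listener addresses (10.0.0.2 and 10.0.0.3) sit on veth peers moved into the nvmf_tgt_ns_spdk namespace, and the three bridge-side legs are tied together through nvmf_br. A condensed sketch of that setup, assuming a clean host with none of the nvmf_* links present; interface and namespace names are the ones used throughout this log, and the "ip link set ... up" calls are omitted for brevity:

  # Hedged sketch of the topology built by nvmf_veth_init above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator leg
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # first target leg
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target leg
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for leg in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$leg" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # sanity check, as in the trace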
00:20:59.530 [2024-12-14 06:51:13.509369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.530 [2024-12-14 06:51:13.509503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.530 [2024-12-14 06:51:13.510230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:59.530 [2024-12-14 06:51:13.510237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.464 06:51:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:00.464 06:51:14 -- common/autotest_common.sh@862 -- # return 0 00:21:00.464 06:51:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:00.464 06:51:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:00.464 06:51:14 -- common/autotest_common.sh@10 -- # set +x 00:21:00.464 06:51:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.464 06:51:14 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:00.464 06:51:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.464 06:51:14 -- common/autotest_common.sh@10 -- # set +x 00:21:00.464 [2024-12-14 06:51:14.398871] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.464 06:51:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.464 06:51:14 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:00.464 06:51:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.464 06:51:14 -- common/autotest_common.sh@10 -- # set +x 00:21:00.722 Malloc0 00:21:00.722 06:51:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.722 06:51:14 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:00.722 06:51:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.722 06:51:14 -- common/autotest_common.sh@10 -- # set +x 00:21:00.722 06:51:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.722 06:51:14 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:00.722 06:51:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.722 06:51:14 -- common/autotest_common.sh@10 -- # set +x 00:21:00.722 06:51:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.722 06:51:14 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:00.722 06:51:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.722 06:51:14 -- common/autotest_common.sh@10 -- # set +x 00:21:00.722 [2024-12-14 06:51:14.477973] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.722 06:51:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.722 06:51:14 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:00.722 06:51:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.722 06:51:14 -- common/autotest_common.sh@10 -- # set +x 00:21:00.722 [2024-12-14 06:51:14.489648] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:00.722 [ 00:21:00.722 { 00:21:00.722 "allow_any_host": true, 00:21:00.722 "hosts": [], 00:21:00.722 "listen_addresses": [], 00:21:00.722 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:00.722 "subtype": "Discovery" 00:21:00.722 }, 00:21:00.722 { 00:21:00.722 "allow_any_host": true, 00:21:00.722 "hosts": 
[], 00:21:00.722 "listen_addresses": [ 00:21:00.722 { 00:21:00.722 "adrfam": "IPv4", 00:21:00.722 "traddr": "10.0.0.2", 00:21:00.722 "transport": "TCP", 00:21:00.722 "trsvcid": "4420", 00:21:00.722 "trtype": "TCP" 00:21:00.722 } 00:21:00.722 ], 00:21:00.722 "max_cntlid": 65519, 00:21:00.722 "max_namespaces": 2, 00:21:00.722 "min_cntlid": 1, 00:21:00.722 "model_number": "SPDK bdev Controller", 00:21:00.722 "namespaces": [ 00:21:00.722 { 00:21:00.722 "bdev_name": "Malloc0", 00:21:00.722 "name": "Malloc0", 00:21:00.722 "nguid": "964BF1917BE74F0D8C193F7AC5DC6878", 00:21:00.722 "nsid": 1, 00:21:00.722 "uuid": "964bf191-7be7-4f0d-8c19-3f7ac5dc6878" 00:21:00.722 } 00:21:00.722 ], 00:21:00.722 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.722 "serial_number": "SPDK00000000000001", 00:21:00.722 "subtype": "NVMe" 00:21:00.722 } 00:21:00.722 ] 00:21:00.722 06:51:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.722 06:51:14 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:00.722 06:51:14 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:00.722 06:51:14 -- host/aer.sh@33 -- # aerpid=82558 00:21:00.722 06:51:14 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:00.723 06:51:14 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:00.723 06:51:14 -- common/autotest_common.sh@1254 -- # local i=0 00:21:00.723 06:51:14 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:00.723 06:51:14 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:21:00.723 06:51:14 -- common/autotest_common.sh@1257 -- # i=1 00:21:00.723 06:51:14 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:21:00.723 06:51:14 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:00.723 06:51:14 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:21:00.723 06:51:14 -- common/autotest_common.sh@1257 -- # i=2 00:21:00.723 06:51:14 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:21:00.981 06:51:14 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:00.981 06:51:14 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:00.981 06:51:14 -- common/autotest_common.sh@1265 -- # return 0 00:21:00.981 06:51:14 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:00.981 06:51:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.981 06:51:14 -- common/autotest_common.sh@10 -- # set +x 00:21:00.981 Malloc1 00:21:00.981 06:51:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.981 06:51:14 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:00.981 06:51:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.981 06:51:14 -- common/autotest_common.sh@10 -- # set +x 00:21:00.981 06:51:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.981 06:51:14 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:00.981 06:51:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.981 06:51:14 -- common/autotest_common.sh@10 -- # set +x 00:21:00.981 Asynchronous Event Request test 00:21:00.981 Attaching to 10.0.0.2 00:21:00.981 Attached to 10.0.0.2 00:21:00.981 Registering asynchronous event callbacks... 00:21:00.981 Starting namespace attribute notice tests for all controllers... 
00:21:00.981 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:00.981 aer_cb - Changed Namespace 00:21:00.981 Cleaning up... 00:21:00.981 [ 00:21:00.981 { 00:21:00.981 "allow_any_host": true, 00:21:00.981 "hosts": [], 00:21:00.981 "listen_addresses": [], 00:21:00.981 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:00.981 "subtype": "Discovery" 00:21:00.981 }, 00:21:00.981 { 00:21:00.981 "allow_any_host": true, 00:21:00.981 "hosts": [], 00:21:00.981 "listen_addresses": [ 00:21:00.981 { 00:21:00.981 "adrfam": "IPv4", 00:21:00.981 "traddr": "10.0.0.2", 00:21:00.981 "transport": "TCP", 00:21:00.981 "trsvcid": "4420", 00:21:00.981 "trtype": "TCP" 00:21:00.981 } 00:21:00.981 ], 00:21:00.981 "max_cntlid": 65519, 00:21:00.981 "max_namespaces": 2, 00:21:00.981 "min_cntlid": 1, 00:21:00.981 "model_number": "SPDK bdev Controller", 00:21:00.981 "namespaces": [ 00:21:00.981 { 00:21:00.981 "bdev_name": "Malloc0", 00:21:00.981 "name": "Malloc0", 00:21:00.981 "nguid": "964BF1917BE74F0D8C193F7AC5DC6878", 00:21:00.981 "nsid": 1, 00:21:00.981 "uuid": "964bf191-7be7-4f0d-8c19-3f7ac5dc6878" 00:21:00.981 }, 00:21:00.981 { 00:21:00.981 "bdev_name": "Malloc1", 00:21:00.981 "name": "Malloc1", 00:21:00.981 "nguid": "B89CB7F63A9C437AA95E199D101844D7", 00:21:00.981 "nsid": 2, 00:21:00.981 "uuid": "b89cb7f6-3a9c-437a-a95e-199d101844d7" 00:21:00.981 } 00:21:00.981 ], 00:21:00.981 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.981 "serial_number": "SPDK00000000000001", 00:21:00.982 "subtype": "NVMe" 00:21:00.982 } 00:21:00.982 ] 00:21:00.982 06:51:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.982 06:51:14 -- host/aer.sh@43 -- # wait 82558 00:21:00.982 06:51:14 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:00.982 06:51:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.982 06:51:14 -- common/autotest_common.sh@10 -- # set +x 00:21:00.982 06:51:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.982 06:51:14 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:00.982 06:51:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.982 06:51:14 -- common/autotest_common.sh@10 -- # set +x 00:21:00.982 06:51:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.982 06:51:14 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:00.982 06:51:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.982 06:51:14 -- common/autotest_common.sh@10 -- # set +x 00:21:00.982 06:51:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.982 06:51:14 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:00.982 06:51:14 -- host/aer.sh@51 -- # nvmftestfini 00:21:00.982 06:51:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:00.982 06:51:14 -- nvmf/common.sh@116 -- # sync 00:21:00.982 06:51:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:00.982 06:51:14 -- nvmf/common.sh@119 -- # set +e 00:21:00.982 06:51:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:00.982 06:51:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:00.982 rmmod nvme_tcp 00:21:01.239 rmmod nvme_fabrics 00:21:01.239 rmmod nvme_keyring 00:21:01.239 06:51:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:01.239 06:51:15 -- nvmf/common.sh@123 -- # set -e 00:21:01.239 06:51:15 -- nvmf/common.sh@124 -- # return 0 00:21:01.239 06:51:15 -- nvmf/common.sh@477 -- # '[' -n 82494 ']' 00:21:01.239 06:51:15 -- nvmf/common.sh@478 -- # killprocess 82494 00:21:01.239 06:51:15 -- 
common/autotest_common.sh@936 -- # '[' -z 82494 ']' 00:21:01.239 06:51:15 -- common/autotest_common.sh@940 -- # kill -0 82494 00:21:01.239 06:51:15 -- common/autotest_common.sh@941 -- # uname 00:21:01.239 06:51:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:01.239 06:51:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82494 00:21:01.239 06:51:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:01.239 06:51:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:01.239 killing process with pid 82494 00:21:01.239 06:51:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82494' 00:21:01.239 06:51:15 -- common/autotest_common.sh@955 -- # kill 82494 00:21:01.239 [2024-12-14 06:51:15.046965] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:01.239 06:51:15 -- common/autotest_common.sh@960 -- # wait 82494 00:21:01.495 06:51:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:01.495 06:51:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:01.495 06:51:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:01.495 06:51:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:01.495 06:51:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:01.495 06:51:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.495 06:51:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.495 06:51:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.495 06:51:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:01.495 00:21:01.495 real 0m2.694s 00:21:01.495 user 0m7.521s 00:21:01.495 sys 0m0.724s 00:21:01.495 06:51:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:01.495 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:21:01.495 ************************************ 00:21:01.495 END TEST nvmf_aer 00:21:01.495 ************************************ 00:21:01.495 06:51:15 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:01.495 06:51:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:01.495 06:51:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:01.495 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:21:01.495 ************************************ 00:21:01.495 START TEST nvmf_async_init 00:21:01.495 ************************************ 00:21:01.495 06:51:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:01.753 * Looking for test storage... 
00:21:01.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:01.753 06:51:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:01.753 06:51:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:01.753 06:51:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:01.753 06:51:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:01.753 06:51:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:01.753 06:51:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:01.753 06:51:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:01.753 06:51:15 -- scripts/common.sh@335 -- # IFS=.-: 00:21:01.753 06:51:15 -- scripts/common.sh@335 -- # read -ra ver1 00:21:01.753 06:51:15 -- scripts/common.sh@336 -- # IFS=.-: 00:21:01.753 06:51:15 -- scripts/common.sh@336 -- # read -ra ver2 00:21:01.753 06:51:15 -- scripts/common.sh@337 -- # local 'op=<' 00:21:01.753 06:51:15 -- scripts/common.sh@339 -- # ver1_l=2 00:21:01.753 06:51:15 -- scripts/common.sh@340 -- # ver2_l=1 00:21:01.753 06:51:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:01.753 06:51:15 -- scripts/common.sh@343 -- # case "$op" in 00:21:01.753 06:51:15 -- scripts/common.sh@344 -- # : 1 00:21:01.753 06:51:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:01.753 06:51:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:01.753 06:51:15 -- scripts/common.sh@364 -- # decimal 1 00:21:01.753 06:51:15 -- scripts/common.sh@352 -- # local d=1 00:21:01.753 06:51:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:01.753 06:51:15 -- scripts/common.sh@354 -- # echo 1 00:21:01.753 06:51:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:01.753 06:51:15 -- scripts/common.sh@365 -- # decimal 2 00:21:01.753 06:51:15 -- scripts/common.sh@352 -- # local d=2 00:21:01.753 06:51:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:01.753 06:51:15 -- scripts/common.sh@354 -- # echo 2 00:21:01.753 06:51:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:01.753 06:51:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:01.753 06:51:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:01.753 06:51:15 -- scripts/common.sh@367 -- # return 0 00:21:01.753 06:51:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:01.753 06:51:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:01.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.753 --rc genhtml_branch_coverage=1 00:21:01.753 --rc genhtml_function_coverage=1 00:21:01.753 --rc genhtml_legend=1 00:21:01.753 --rc geninfo_all_blocks=1 00:21:01.753 --rc geninfo_unexecuted_blocks=1 00:21:01.753 00:21:01.753 ' 00:21:01.753 06:51:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:01.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.753 --rc genhtml_branch_coverage=1 00:21:01.753 --rc genhtml_function_coverage=1 00:21:01.753 --rc genhtml_legend=1 00:21:01.753 --rc geninfo_all_blocks=1 00:21:01.753 --rc geninfo_unexecuted_blocks=1 00:21:01.753 00:21:01.753 ' 00:21:01.753 06:51:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:01.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.753 --rc genhtml_branch_coverage=1 00:21:01.753 --rc genhtml_function_coverage=1 00:21:01.753 --rc genhtml_legend=1 00:21:01.753 --rc geninfo_all_blocks=1 00:21:01.753 --rc geninfo_unexecuted_blocks=1 00:21:01.753 00:21:01.753 ' 00:21:01.753 
06:51:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:01.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.753 --rc genhtml_branch_coverage=1 00:21:01.753 --rc genhtml_function_coverage=1 00:21:01.753 --rc genhtml_legend=1 00:21:01.753 --rc geninfo_all_blocks=1 00:21:01.753 --rc geninfo_unexecuted_blocks=1 00:21:01.753 00:21:01.753 ' 00:21:01.753 06:51:15 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:01.753 06:51:15 -- nvmf/common.sh@7 -- # uname -s 00:21:01.754 06:51:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:01.754 06:51:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:01.754 06:51:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:01.754 06:51:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:01.754 06:51:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:01.754 06:51:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:01.754 06:51:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:01.754 06:51:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:01.754 06:51:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:01.754 06:51:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:01.754 06:51:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:21:01.754 06:51:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:21:01.754 06:51:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:01.754 06:51:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:01.754 06:51:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:01.754 06:51:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:01.754 06:51:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:01.754 06:51:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:01.754 06:51:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:01.754 06:51:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.754 06:51:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.754 06:51:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.754 06:51:15 -- paths/export.sh@5 -- # export PATH 00:21:01.754 06:51:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.754 06:51:15 -- nvmf/common.sh@46 -- # : 0 00:21:01.754 06:51:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:01.754 06:51:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:01.754 06:51:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:01.754 06:51:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:01.754 06:51:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:01.754 06:51:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:01.754 06:51:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:01.754 06:51:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:01.754 06:51:15 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:01.754 06:51:15 -- host/async_init.sh@14 -- # null_block_size=512 00:21:01.754 06:51:15 -- host/async_init.sh@15 -- # null_bdev=null0 00:21:01.754 06:51:15 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:01.754 06:51:15 -- host/async_init.sh@20 -- # uuidgen 00:21:01.754 06:51:15 -- host/async_init.sh@20 -- # tr -d - 00:21:01.754 06:51:15 -- host/async_init.sh@20 -- # nguid=55635436479541a5a67a1b85cc6d5e9f 00:21:01.754 06:51:15 -- host/async_init.sh@22 -- # nvmftestinit 00:21:01.754 06:51:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:01.754 06:51:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:01.754 06:51:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:01.754 06:51:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:01.754 06:51:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:01.754 06:51:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.754 06:51:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.754 06:51:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.754 06:51:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:01.754 06:51:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:01.754 06:51:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:01.754 06:51:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:01.754 06:51:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:01.754 06:51:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:01.754 06:51:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.754 06:51:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.754 06:51:15 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:01.754 06:51:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:01.754 06:51:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:01.754 06:51:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:01.754 06:51:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:01.754 06:51:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.754 06:51:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:01.754 06:51:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:01.754 06:51:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:01.754 06:51:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:01.754 06:51:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:01.754 06:51:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:01.754 Cannot find device "nvmf_tgt_br" 00:21:01.754 06:51:15 -- nvmf/common.sh@154 -- # true 00:21:01.754 06:51:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:01.754 Cannot find device "nvmf_tgt_br2" 00:21:01.754 06:51:15 -- nvmf/common.sh@155 -- # true 00:21:01.754 06:51:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:01.754 06:51:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:01.754 Cannot find device "nvmf_tgt_br" 00:21:01.754 06:51:15 -- nvmf/common.sh@157 -- # true 00:21:01.754 06:51:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:01.754 Cannot find device "nvmf_tgt_br2" 00:21:01.754 06:51:15 -- nvmf/common.sh@158 -- # true 00:21:01.754 06:51:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:02.013 06:51:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:02.013 06:51:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:02.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:02.013 06:51:15 -- nvmf/common.sh@161 -- # true 00:21:02.013 06:51:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:02.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:02.013 06:51:15 -- nvmf/common.sh@162 -- # true 00:21:02.013 06:51:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:02.013 06:51:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:02.013 06:51:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:02.013 06:51:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:02.013 06:51:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:02.013 06:51:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:02.013 06:51:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:02.013 06:51:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:02.013 06:51:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:02.013 06:51:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:02.013 06:51:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:02.013 06:51:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:02.013 06:51:15 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:02.013 06:51:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:02.013 06:51:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:02.013 06:51:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:02.013 06:51:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:02.013 06:51:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:02.013 06:51:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:02.013 06:51:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:02.013 06:51:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:02.013 06:51:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:02.013 06:51:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:02.272 06:51:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:02.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:21:02.272 00:21:02.272 --- 10.0.0.2 ping statistics --- 00:21:02.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.272 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:21:02.272 06:51:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:02.273 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:02.273 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:21:02.273 00:21:02.273 --- 10.0.0.3 ping statistics --- 00:21:02.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.273 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:21:02.273 06:51:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:02.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:02.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:21:02.273 00:21:02.273 --- 10.0.0.1 ping statistics --- 00:21:02.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.273 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:21:02.273 06:51:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.273 06:51:16 -- nvmf/common.sh@421 -- # return 0 00:21:02.273 06:51:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:02.273 06:51:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.273 06:51:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:02.273 06:51:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:02.273 06:51:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.273 06:51:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:02.273 06:51:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:02.273 06:51:16 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:02.273 06:51:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:02.273 06:51:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:02.273 06:51:16 -- common/autotest_common.sh@10 -- # set +x 00:21:02.273 06:51:16 -- nvmf/common.sh@469 -- # nvmfpid=82735 00:21:02.273 06:51:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:02.273 06:51:16 -- nvmf/common.sh@470 -- # waitforlisten 82735 00:21:02.273 06:51:16 -- common/autotest_common.sh@829 -- # '[' -z 82735 ']' 00:21:02.273 06:51:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.273 06:51:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.273 06:51:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.273 06:51:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.273 06:51:16 -- common/autotest_common.sh@10 -- # set +x 00:21:02.273 [2024-12-14 06:51:16.095432] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:02.273 [2024-12-14 06:51:16.095854] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.273 [2024-12-14 06:51:16.229733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.531 [2024-12-14 06:51:16.313799] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:02.531 [2024-12-14 06:51:16.314023] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.531 [2024-12-14 06:51:16.314038] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.531 [2024-12-14 06:51:16.314047] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
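(For orientation, a condensed sketch of the test network that nvmf_veth_init assembles in the trace above, written as plain shell for illustration rather than taken from the harness: one veth pair per interface, target ends moved into the nvmf_tgt_ns_spdk namespace, host ends enslaved to a bridge. Names and addresses are the ones the log uses; the second target interface, nvmf_tgt_if2 / 10.0.0.3, is created the same way and omitted here.)

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge ties the host-side ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                             # initiator can reach the target address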
00:21:02.531 [2024-12-14 06:51:16.314083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.099 06:51:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:03.099 06:51:17 -- common/autotest_common.sh@862 -- # return 0 00:21:03.099 06:51:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:03.099 06:51:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:03.099 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:03.358 06:51:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.358 06:51:17 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:03.358 06:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.358 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:03.358 [2024-12-14 06:51:17.124842] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.358 06:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.358 06:51:17 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:03.358 06:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.358 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:03.358 null0 00:21:03.358 06:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.358 06:51:17 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:03.358 06:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.358 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:03.358 06:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.358 06:51:17 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:03.358 06:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.358 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:03.358 06:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.358 06:51:17 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 55635436479541a5a67a1b85cc6d5e9f 00:21:03.358 06:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.358 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:03.358 06:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.358 06:51:17 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:03.358 06:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.358 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:03.358 [2024-12-14 06:51:17.164987] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.358 06:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.358 06:51:17 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:03.358 06:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.358 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:03.618 nvme0n1 00:21:03.618 06:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.618 06:51:17 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:03.618 06:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.618 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:03.618 [ 00:21:03.618 { 00:21:03.618 "aliases": [ 00:21:03.618 "55635436-4795-41a5-a67a-1b85cc6d5e9f" 
00:21:03.618 ], 00:21:03.618 "assigned_rate_limits": { 00:21:03.618 "r_mbytes_per_sec": 0, 00:21:03.618 "rw_ios_per_sec": 0, 00:21:03.618 "rw_mbytes_per_sec": 0, 00:21:03.618 "w_mbytes_per_sec": 0 00:21:03.618 }, 00:21:03.618 "block_size": 512, 00:21:03.618 "claimed": false, 00:21:03.618 "driver_specific": { 00:21:03.618 "mp_policy": "active_passive", 00:21:03.618 "nvme": [ 00:21:03.618 { 00:21:03.618 "ctrlr_data": { 00:21:03.618 "ana_reporting": false, 00:21:03.618 "cntlid": 1, 00:21:03.618 "firmware_revision": "24.01.1", 00:21:03.618 "model_number": "SPDK bdev Controller", 00:21:03.618 "multi_ctrlr": true, 00:21:03.618 "oacs": { 00:21:03.618 "firmware": 0, 00:21:03.618 "format": 0, 00:21:03.618 "ns_manage": 0, 00:21:03.618 "security": 0 00:21:03.618 }, 00:21:03.618 "serial_number": "00000000000000000000", 00:21:03.618 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:03.618 "vendor_id": "0x8086" 00:21:03.618 }, 00:21:03.618 "ns_data": { 00:21:03.618 "can_share": true, 00:21:03.618 "id": 1 00:21:03.618 }, 00:21:03.618 "trid": { 00:21:03.618 "adrfam": "IPv4", 00:21:03.618 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:03.618 "traddr": "10.0.0.2", 00:21:03.618 "trsvcid": "4420", 00:21:03.618 "trtype": "TCP" 00:21:03.618 }, 00:21:03.618 "vs": { 00:21:03.618 "nvme_version": "1.3" 00:21:03.618 } 00:21:03.618 } 00:21:03.618 ] 00:21:03.618 }, 00:21:03.618 "name": "nvme0n1", 00:21:03.618 "num_blocks": 2097152, 00:21:03.618 "product_name": "NVMe disk", 00:21:03.618 "supported_io_types": { 00:21:03.618 "abort": true, 00:21:03.618 "compare": true, 00:21:03.618 "compare_and_write": true, 00:21:03.618 "flush": true, 00:21:03.618 "nvme_admin": true, 00:21:03.618 "nvme_io": true, 00:21:03.618 "read": true, 00:21:03.618 "reset": true, 00:21:03.618 "unmap": false, 00:21:03.618 "write": true, 00:21:03.618 "write_zeroes": true 00:21:03.618 }, 00:21:03.618 "uuid": "55635436-4795-41a5-a67a-1b85cc6d5e9f", 00:21:03.618 "zoned": false 00:21:03.618 } 00:21:03.618 ] 00:21:03.618 06:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.618 06:51:17 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:03.618 06:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.618 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:03.618 [2024-12-14 06:51:17.420895] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:03.618 [2024-12-14 06:51:17.421027] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ecf90 (9): Bad file descriptor 00:21:03.618 [2024-12-14 06:51:17.553056] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
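(The RPC sequence the async_init test has driven up to this point, restated as a minimal sketch. The test's rpc_cmd helper issues these same RPCs; they are shown here as direct scripts/rpc.py calls for readability, assuming the spdk repo root as the working directory and a nvmf_tgt already listening on the default RPC socket. The NGUID is the one generated by uuidgen earlier in the log.)

./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py bdev_null_create null0 1024 512                      # 1024 MiB null bdev, 512-byte blocks
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a  # -a: allow any host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 55635436479541a5a67a1b85cc6d5e9f
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Host side: attach a bdev_nvme controller over TCP, reset it, and re-read the bdev;
# the cntlid in the bdev_get_bdevs output bumps from 1 to 2 across the reset.
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_nvme_reset_controller nvme0
./scripts/rpc.py bdev_get_bdevs -b nvme0n1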
00:21:03.618 06:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.618 06:51:17 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:03.618 06:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.618 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:03.618 [ 00:21:03.618 { 00:21:03.618 "aliases": [ 00:21:03.618 "55635436-4795-41a5-a67a-1b85cc6d5e9f" 00:21:03.618 ], 00:21:03.618 "assigned_rate_limits": { 00:21:03.618 "r_mbytes_per_sec": 0, 00:21:03.618 "rw_ios_per_sec": 0, 00:21:03.618 "rw_mbytes_per_sec": 0, 00:21:03.618 "w_mbytes_per_sec": 0 00:21:03.618 }, 00:21:03.618 "block_size": 512, 00:21:03.618 "claimed": false, 00:21:03.618 "driver_specific": { 00:21:03.618 "mp_policy": "active_passive", 00:21:03.618 "nvme": [ 00:21:03.618 { 00:21:03.618 "ctrlr_data": { 00:21:03.618 "ana_reporting": false, 00:21:03.618 "cntlid": 2, 00:21:03.618 "firmware_revision": "24.01.1", 00:21:03.618 "model_number": "SPDK bdev Controller", 00:21:03.618 "multi_ctrlr": true, 00:21:03.618 "oacs": { 00:21:03.618 "firmware": 0, 00:21:03.618 "format": 0, 00:21:03.618 "ns_manage": 0, 00:21:03.618 "security": 0 00:21:03.618 }, 00:21:03.618 "serial_number": "00000000000000000000", 00:21:03.618 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:03.618 "vendor_id": "0x8086" 00:21:03.618 }, 00:21:03.618 "ns_data": { 00:21:03.618 "can_share": true, 00:21:03.618 "id": 1 00:21:03.618 }, 00:21:03.618 "trid": { 00:21:03.618 "adrfam": "IPv4", 00:21:03.618 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:03.618 "traddr": "10.0.0.2", 00:21:03.618 "trsvcid": "4420", 00:21:03.618 "trtype": "TCP" 00:21:03.618 }, 00:21:03.618 "vs": { 00:21:03.618 "nvme_version": "1.3" 00:21:03.618 } 00:21:03.618 } 00:21:03.618 ] 00:21:03.618 }, 00:21:03.618 "name": "nvme0n1", 00:21:03.618 "num_blocks": 2097152, 00:21:03.618 "product_name": "NVMe disk", 00:21:03.618 "supported_io_types": { 00:21:03.618 "abort": true, 00:21:03.618 "compare": true, 00:21:03.618 "compare_and_write": true, 00:21:03.618 "flush": true, 00:21:03.618 "nvme_admin": true, 00:21:03.618 "nvme_io": true, 00:21:03.618 "read": true, 00:21:03.618 "reset": true, 00:21:03.618 "unmap": false, 00:21:03.618 "write": true, 00:21:03.618 "write_zeroes": true 00:21:03.618 }, 00:21:03.618 "uuid": "55635436-4795-41a5-a67a-1b85cc6d5e9f", 00:21:03.618 "zoned": false 00:21:03.618 } 00:21:03.618 ] 00:21:03.618 06:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.618 06:51:17 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.618 06:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.618 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:03.618 06:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.618 06:51:17 -- host/async_init.sh@53 -- # mktemp 00:21:03.618 06:51:17 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.AEhsQqten2 00:21:03.618 06:51:17 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:03.618 06:51:17 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.AEhsQqten2 00:21:03.618 06:51:17 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:03.618 06:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.618 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:03.877 06:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.877 06:51:17 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:03.877 06:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.877 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:03.877 [2024-12-14 06:51:17.617018] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:03.877 [2024-12-14 06:51:17.617171] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:03.877 06:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.877 06:51:17 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AEhsQqten2 00:21:03.877 06:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.877 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:03.877 06:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.877 06:51:17 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AEhsQqten2 00:21:03.877 06:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.877 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:03.877 [2024-12-14 06:51:17.633012] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:03.877 nvme0n1 00:21:03.877 06:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.877 06:51:17 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:03.877 06:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.877 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:03.877 [ 00:21:03.877 { 00:21:03.877 "aliases": [ 00:21:03.877 "55635436-4795-41a5-a67a-1b85cc6d5e9f" 00:21:03.877 ], 00:21:03.877 "assigned_rate_limits": { 00:21:03.877 "r_mbytes_per_sec": 0, 00:21:03.877 "rw_ios_per_sec": 0, 00:21:03.877 "rw_mbytes_per_sec": 0, 00:21:03.877 "w_mbytes_per_sec": 0 00:21:03.877 }, 00:21:03.877 "block_size": 512, 00:21:03.877 "claimed": false, 00:21:03.877 "driver_specific": { 00:21:03.877 "mp_policy": "active_passive", 00:21:03.877 "nvme": [ 00:21:03.877 { 00:21:03.877 "ctrlr_data": { 00:21:03.877 "ana_reporting": false, 00:21:03.877 "cntlid": 3, 00:21:03.877 "firmware_revision": "24.01.1", 00:21:03.877 "model_number": "SPDK bdev Controller", 00:21:03.877 "multi_ctrlr": true, 00:21:03.877 "oacs": { 00:21:03.877 "firmware": 0, 00:21:03.877 "format": 0, 00:21:03.877 "ns_manage": 0, 00:21:03.877 "security": 0 00:21:03.877 }, 00:21:03.877 "serial_number": "00000000000000000000", 00:21:03.877 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:03.877 "vendor_id": "0x8086" 00:21:03.877 }, 00:21:03.877 "ns_data": { 00:21:03.877 "can_share": true, 00:21:03.877 "id": 1 00:21:03.877 }, 00:21:03.877 "trid": { 00:21:03.877 "adrfam": "IPv4", 00:21:03.877 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:03.877 "traddr": "10.0.0.2", 00:21:03.877 "trsvcid": "4421", 00:21:03.877 "trtype": "TCP" 00:21:03.877 }, 00:21:03.877 "vs": { 00:21:03.877 "nvme_version": "1.3" 00:21:03.877 } 00:21:03.877 } 00:21:03.877 ] 00:21:03.877 }, 00:21:03.877 "name": "nvme0n1", 00:21:03.877 "num_blocks": 2097152, 00:21:03.877 "product_name": "NVMe disk", 00:21:03.877 "supported_io_types": { 00:21:03.878 "abort": true, 00:21:03.878 "compare": true, 00:21:03.878 "compare_and_write": true, 00:21:03.878 "flush": true, 00:21:03.878 "nvme_admin": true, 00:21:03.878 "nvme_io": true, 00:21:03.878 
"read": true, 00:21:03.878 "reset": true, 00:21:03.878 "unmap": false, 00:21:03.878 "write": true, 00:21:03.878 "write_zeroes": true 00:21:03.878 }, 00:21:03.878 "uuid": "55635436-4795-41a5-a67a-1b85cc6d5e9f", 00:21:03.878 "zoned": false 00:21:03.878 } 00:21:03.878 ] 00:21:03.878 06:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.878 06:51:17 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.878 06:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.878 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:03.878 06:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.878 06:51:17 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.AEhsQqten2 00:21:03.878 06:51:17 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:03.878 06:51:17 -- host/async_init.sh@78 -- # nvmftestfini 00:21:03.878 06:51:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:03.878 06:51:17 -- nvmf/common.sh@116 -- # sync 00:21:03.878 06:51:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:03.878 06:51:17 -- nvmf/common.sh@119 -- # set +e 00:21:03.878 06:51:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:03.878 06:51:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:03.878 rmmod nvme_tcp 00:21:03.878 rmmod nvme_fabrics 00:21:03.878 rmmod nvme_keyring 00:21:03.878 06:51:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:03.878 06:51:17 -- nvmf/common.sh@123 -- # set -e 00:21:03.878 06:51:17 -- nvmf/common.sh@124 -- # return 0 00:21:03.878 06:51:17 -- nvmf/common.sh@477 -- # '[' -n 82735 ']' 00:21:03.878 06:51:17 -- nvmf/common.sh@478 -- # killprocess 82735 00:21:03.878 06:51:17 -- common/autotest_common.sh@936 -- # '[' -z 82735 ']' 00:21:03.878 06:51:17 -- common/autotest_common.sh@940 -- # kill -0 82735 00:21:03.878 06:51:17 -- common/autotest_common.sh@941 -- # uname 00:21:03.878 06:51:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:03.878 06:51:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82735 00:21:04.136 06:51:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:04.136 06:51:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:04.136 killing process with pid 82735 00:21:04.136 06:51:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82735' 00:21:04.136 06:51:17 -- common/autotest_common.sh@955 -- # kill 82735 00:21:04.136 06:51:17 -- common/autotest_common.sh@960 -- # wait 82735 00:21:04.395 06:51:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:04.395 06:51:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:04.395 06:51:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:04.395 06:51:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:04.395 06:51:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:04.395 06:51:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.395 06:51:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:04.395 06:51:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.395 06:51:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:04.395 00:21:04.395 real 0m2.766s 00:21:04.395 user 0m2.500s 00:21:04.395 sys 0m0.696s 00:21:04.395 06:51:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:04.395 ************************************ 00:21:04.395 END TEST nvmf_async_init 00:21:04.395 ************************************ 00:21:04.395 06:51:18 -- 
common/autotest_common.sh@10 -- # set +x 00:21:04.395 06:51:18 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:04.395 06:51:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:04.395 06:51:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:04.395 06:51:18 -- common/autotest_common.sh@10 -- # set +x 00:21:04.395 ************************************ 00:21:04.395 START TEST dma 00:21:04.395 ************************************ 00:21:04.395 06:51:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:04.395 * Looking for test storage... 00:21:04.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:04.395 06:51:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:04.395 06:51:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:04.395 06:51:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:04.681 06:51:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:04.681 06:51:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:04.681 06:51:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:04.681 06:51:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:04.681 06:51:18 -- scripts/common.sh@335 -- # IFS=.-: 00:21:04.681 06:51:18 -- scripts/common.sh@335 -- # read -ra ver1 00:21:04.681 06:51:18 -- scripts/common.sh@336 -- # IFS=.-: 00:21:04.681 06:51:18 -- scripts/common.sh@336 -- # read -ra ver2 00:21:04.681 06:51:18 -- scripts/common.sh@337 -- # local 'op=<' 00:21:04.681 06:51:18 -- scripts/common.sh@339 -- # ver1_l=2 00:21:04.681 06:51:18 -- scripts/common.sh@340 -- # ver2_l=1 00:21:04.681 06:51:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:04.681 06:51:18 -- scripts/common.sh@343 -- # case "$op" in 00:21:04.681 06:51:18 -- scripts/common.sh@344 -- # : 1 00:21:04.681 06:51:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:04.681 06:51:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:04.681 06:51:18 -- scripts/common.sh@364 -- # decimal 1 00:21:04.681 06:51:18 -- scripts/common.sh@352 -- # local d=1 00:21:04.681 06:51:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:04.681 06:51:18 -- scripts/common.sh@354 -- # echo 1 00:21:04.681 06:51:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:04.681 06:51:18 -- scripts/common.sh@365 -- # decimal 2 00:21:04.681 06:51:18 -- scripts/common.sh@352 -- # local d=2 00:21:04.681 06:51:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:04.681 06:51:18 -- scripts/common.sh@354 -- # echo 2 00:21:04.681 06:51:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:04.681 06:51:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:04.681 06:51:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:04.681 06:51:18 -- scripts/common.sh@367 -- # return 0 00:21:04.681 06:51:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:04.681 06:51:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:04.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.681 --rc genhtml_branch_coverage=1 00:21:04.681 --rc genhtml_function_coverage=1 00:21:04.681 --rc genhtml_legend=1 00:21:04.681 --rc geninfo_all_blocks=1 00:21:04.681 --rc geninfo_unexecuted_blocks=1 00:21:04.681 00:21:04.681 ' 00:21:04.681 06:51:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:04.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.681 --rc genhtml_branch_coverage=1 00:21:04.681 --rc genhtml_function_coverage=1 00:21:04.681 --rc genhtml_legend=1 00:21:04.681 --rc geninfo_all_blocks=1 00:21:04.681 --rc geninfo_unexecuted_blocks=1 00:21:04.681 00:21:04.681 ' 00:21:04.681 06:51:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:04.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.681 --rc genhtml_branch_coverage=1 00:21:04.681 --rc genhtml_function_coverage=1 00:21:04.681 --rc genhtml_legend=1 00:21:04.681 --rc geninfo_all_blocks=1 00:21:04.681 --rc geninfo_unexecuted_blocks=1 00:21:04.681 00:21:04.681 ' 00:21:04.681 06:51:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:04.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.681 --rc genhtml_branch_coverage=1 00:21:04.681 --rc genhtml_function_coverage=1 00:21:04.681 --rc genhtml_legend=1 00:21:04.681 --rc geninfo_all_blocks=1 00:21:04.681 --rc geninfo_unexecuted_blocks=1 00:21:04.681 00:21:04.681 ' 00:21:04.681 06:51:18 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:04.681 06:51:18 -- nvmf/common.sh@7 -- # uname -s 00:21:04.681 06:51:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.681 06:51:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.681 06:51:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.681 06:51:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.681 06:51:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.681 06:51:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.681 06:51:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.681 06:51:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.681 06:51:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.681 06:51:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.681 06:51:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:21:04.681 
06:51:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:21:04.681 06:51:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.681 06:51:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.681 06:51:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:04.681 06:51:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:04.681 06:51:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.681 06:51:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.681 06:51:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.681 06:51:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.682 06:51:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.682 06:51:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.682 06:51:18 -- paths/export.sh@5 -- # export PATH 00:21:04.682 06:51:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.682 06:51:18 -- nvmf/common.sh@46 -- # : 0 00:21:04.682 06:51:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:04.682 06:51:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:04.682 06:51:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:04.682 06:51:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.682 06:51:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.682 06:51:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:21:04.682 06:51:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:04.682 06:51:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:04.682 06:51:18 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:04.682 06:51:18 -- host/dma.sh@13 -- # exit 0 00:21:04.682 00:21:04.682 real 0m0.206s 00:21:04.682 user 0m0.128s 00:21:04.682 sys 0m0.091s 00:21:04.682 06:51:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:04.682 06:51:18 -- common/autotest_common.sh@10 -- # set +x 00:21:04.682 ************************************ 00:21:04.682 END TEST dma 00:21:04.682 ************************************ 00:21:04.682 06:51:18 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:04.682 06:51:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:04.682 06:51:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:04.682 06:51:18 -- common/autotest_common.sh@10 -- # set +x 00:21:04.682 ************************************ 00:21:04.682 START TEST nvmf_identify 00:21:04.682 ************************************ 00:21:04.682 06:51:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:04.682 * Looking for test storage... 00:21:04.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:04.682 06:51:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:04.682 06:51:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:04.682 06:51:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:04.950 06:51:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:04.950 06:51:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:04.950 06:51:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:04.950 06:51:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:04.950 06:51:18 -- scripts/common.sh@335 -- # IFS=.-: 00:21:04.950 06:51:18 -- scripts/common.sh@335 -- # read -ra ver1 00:21:04.950 06:51:18 -- scripts/common.sh@336 -- # IFS=.-: 00:21:04.950 06:51:18 -- scripts/common.sh@336 -- # read -ra ver2 00:21:04.950 06:51:18 -- scripts/common.sh@337 -- # local 'op=<' 00:21:04.950 06:51:18 -- scripts/common.sh@339 -- # ver1_l=2 00:21:04.950 06:51:18 -- scripts/common.sh@340 -- # ver2_l=1 00:21:04.950 06:51:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:04.950 06:51:18 -- scripts/common.sh@343 -- # case "$op" in 00:21:04.950 06:51:18 -- scripts/common.sh@344 -- # : 1 00:21:04.950 06:51:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:04.950 06:51:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:04.950 06:51:18 -- scripts/common.sh@364 -- # decimal 1 00:21:04.950 06:51:18 -- scripts/common.sh@352 -- # local d=1 00:21:04.950 06:51:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:04.950 06:51:18 -- scripts/common.sh@354 -- # echo 1 00:21:04.950 06:51:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:04.950 06:51:18 -- scripts/common.sh@365 -- # decimal 2 00:21:04.950 06:51:18 -- scripts/common.sh@352 -- # local d=2 00:21:04.950 06:51:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:04.950 06:51:18 -- scripts/common.sh@354 -- # echo 2 00:21:04.950 06:51:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:04.950 06:51:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:04.950 06:51:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:04.950 06:51:18 -- scripts/common.sh@367 -- # return 0 00:21:04.950 06:51:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:04.950 06:51:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:04.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.950 --rc genhtml_branch_coverage=1 00:21:04.950 --rc genhtml_function_coverage=1 00:21:04.950 --rc genhtml_legend=1 00:21:04.950 --rc geninfo_all_blocks=1 00:21:04.950 --rc geninfo_unexecuted_blocks=1 00:21:04.950 00:21:04.950 ' 00:21:04.950 06:51:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:04.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.950 --rc genhtml_branch_coverage=1 00:21:04.950 --rc genhtml_function_coverage=1 00:21:04.950 --rc genhtml_legend=1 00:21:04.950 --rc geninfo_all_blocks=1 00:21:04.950 --rc geninfo_unexecuted_blocks=1 00:21:04.950 00:21:04.950 ' 00:21:04.950 06:51:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:04.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.950 --rc genhtml_branch_coverage=1 00:21:04.950 --rc genhtml_function_coverage=1 00:21:04.950 --rc genhtml_legend=1 00:21:04.950 --rc geninfo_all_blocks=1 00:21:04.950 --rc geninfo_unexecuted_blocks=1 00:21:04.950 00:21:04.950 ' 00:21:04.950 06:51:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:04.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.950 --rc genhtml_branch_coverage=1 00:21:04.950 --rc genhtml_function_coverage=1 00:21:04.950 --rc genhtml_legend=1 00:21:04.950 --rc geninfo_all_blocks=1 00:21:04.950 --rc geninfo_unexecuted_blocks=1 00:21:04.950 00:21:04.950 ' 00:21:04.950 06:51:18 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:04.950 06:51:18 -- nvmf/common.sh@7 -- # uname -s 00:21:04.950 06:51:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.950 06:51:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.950 06:51:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.950 06:51:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.950 06:51:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.950 06:51:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.950 06:51:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.950 06:51:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.950 06:51:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.950 06:51:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.950 06:51:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:21:04.950 
06:51:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:21:04.950 06:51:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.950 06:51:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.950 06:51:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:04.950 06:51:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:04.950 06:51:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.950 06:51:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.950 06:51:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.950 06:51:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.950 06:51:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.950 06:51:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.950 06:51:18 -- paths/export.sh@5 -- # export PATH 00:21:04.951 06:51:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.951 06:51:18 -- nvmf/common.sh@46 -- # : 0 00:21:04.951 06:51:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:04.951 06:51:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:04.951 06:51:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:04.951 06:51:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.951 06:51:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.951 06:51:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:21:04.951 06:51:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:04.951 06:51:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:04.951 06:51:18 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:04.951 06:51:18 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:04.951 06:51:18 -- host/identify.sh@14 -- # nvmftestinit 00:21:04.951 06:51:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:04.951 06:51:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.951 06:51:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:04.951 06:51:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:04.951 06:51:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:04.951 06:51:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.951 06:51:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:04.951 06:51:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.951 06:51:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:04.951 06:51:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:04.951 06:51:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:04.951 06:51:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:04.951 06:51:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:04.951 06:51:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:04.951 06:51:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:04.951 06:51:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:04.951 06:51:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:04.951 06:51:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:04.951 06:51:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:04.951 06:51:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:04.951 06:51:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:04.951 06:51:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:04.951 06:51:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:04.951 06:51:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:04.951 06:51:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:04.951 06:51:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:04.951 06:51:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:04.951 06:51:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:04.951 Cannot find device "nvmf_tgt_br" 00:21:04.951 06:51:18 -- nvmf/common.sh@154 -- # true 00:21:04.951 06:51:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:04.951 Cannot find device "nvmf_tgt_br2" 00:21:04.951 06:51:18 -- nvmf/common.sh@155 -- # true 00:21:04.951 06:51:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:04.951 06:51:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:04.951 Cannot find device "nvmf_tgt_br" 00:21:04.951 06:51:18 -- nvmf/common.sh@157 -- # true 00:21:04.951 06:51:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:04.951 Cannot find device "nvmf_tgt_br2" 00:21:04.951 06:51:18 -- nvmf/common.sh@158 -- # true 00:21:04.951 06:51:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:04.951 06:51:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:04.951 06:51:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:04.951 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:21:04.951 06:51:18 -- nvmf/common.sh@161 -- # true 00:21:04.951 06:51:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:04.951 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:04.951 06:51:18 -- nvmf/common.sh@162 -- # true 00:21:04.951 06:51:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:04.951 06:51:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:04.951 06:51:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:04.951 06:51:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:04.951 06:51:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:04.951 06:51:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:04.951 06:51:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:04.951 06:51:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:04.951 06:51:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:04.951 06:51:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:04.951 06:51:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:04.951 06:51:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:04.951 06:51:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:04.951 06:51:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:05.210 06:51:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:05.210 06:51:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:05.210 06:51:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:05.210 06:51:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:05.210 06:51:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:05.210 06:51:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:05.210 06:51:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:05.210 06:51:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:05.210 06:51:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:05.210 06:51:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:05.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:05.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:21:05.210 00:21:05.210 --- 10.0.0.2 ping statistics --- 00:21:05.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.210 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:21:05.210 06:51:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:05.210 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:05.210 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:21:05.210 00:21:05.210 --- 10.0.0.3 ping statistics --- 00:21:05.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.210 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:05.210 06:51:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:05.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:05.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:21:05.210 00:21:05.210 --- 10.0.0.1 ping statistics --- 00:21:05.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.210 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:21:05.210 06:51:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:05.210 06:51:19 -- nvmf/common.sh@421 -- # return 0 00:21:05.210 06:51:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:05.210 06:51:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:05.210 06:51:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:05.210 06:51:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:05.210 06:51:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:05.210 06:51:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:05.210 06:51:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:05.210 06:51:19 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:05.210 06:51:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:05.210 06:51:19 -- common/autotest_common.sh@10 -- # set +x 00:21:05.210 06:51:19 -- host/identify.sh@19 -- # nvmfpid=83016 00:21:05.210 06:51:19 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:05.210 06:51:19 -- host/identify.sh@23 -- # waitforlisten 83016 00:21:05.210 06:51:19 -- common/autotest_common.sh@829 -- # '[' -z 83016 ']' 00:21:05.211 06:51:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.211 06:51:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:05.211 06:51:19 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:05.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.211 06:51:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.211 06:51:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:05.211 06:51:19 -- common/autotest_common.sh@10 -- # set +x 00:21:05.211 [2024-12-14 06:51:19.118788] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:05.211 [2024-12-14 06:51:19.118882] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.469 [2024-12-14 06:51:19.259734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:05.469 [2024-12-14 06:51:19.357659] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:05.469 [2024-12-14 06:51:19.357816] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.469 [2024-12-14 06:51:19.357829] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.469 [2024-12-14 06:51:19.357837] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:05.469 [2024-12-14 06:51:19.358063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.469 [2024-12-14 06:51:19.358600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.469 [2024-12-14 06:51:19.359253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:05.469 [2024-12-14 06:51:19.359257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.406 06:51:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:06.406 06:51:20 -- common/autotest_common.sh@862 -- # return 0 00:21:06.406 06:51:20 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:06.406 06:51:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.406 06:51:20 -- common/autotest_common.sh@10 -- # set +x 00:21:06.406 [2024-12-14 06:51:20.156679] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.406 06:51:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.406 06:51:20 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:06.406 06:51:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:06.406 06:51:20 -- common/autotest_common.sh@10 -- # set +x 00:21:06.406 06:51:20 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:06.406 06:51:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.406 06:51:20 -- common/autotest_common.sh@10 -- # set +x 00:21:06.406 Malloc0 00:21:06.406 06:51:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.406 06:51:20 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:06.406 06:51:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.406 06:51:20 -- common/autotest_common.sh@10 -- # set +x 00:21:06.406 06:51:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.406 06:51:20 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:06.406 06:51:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.406 06:51:20 -- common/autotest_common.sh@10 -- # set +x 00:21:06.406 06:51:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.407 06:51:20 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:06.407 06:51:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.407 06:51:20 -- common/autotest_common.sh@10 -- # set +x 00:21:06.407 [2024-12-14 06:51:20.271307] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.407 06:51:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.407 06:51:20 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:06.407 06:51:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.407 06:51:20 -- common/autotest_common.sh@10 -- # set +x 00:21:06.407 06:51:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.407 06:51:20 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:06.407 06:51:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.407 06:51:20 -- common/autotest_common.sh@10 -- # set +x 00:21:06.407 [2024-12-14 06:51:20.291085] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:06.407 [ 
00:21:06.407 { 00:21:06.407 "allow_any_host": true, 00:21:06.407 "hosts": [], 00:21:06.407 "listen_addresses": [ 00:21:06.407 { 00:21:06.407 "adrfam": "IPv4", 00:21:06.407 "traddr": "10.0.0.2", 00:21:06.407 "transport": "TCP", 00:21:06.407 "trsvcid": "4420", 00:21:06.407 "trtype": "TCP" 00:21:06.407 } 00:21:06.407 ], 00:21:06.407 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:06.407 "subtype": "Discovery" 00:21:06.407 }, 00:21:06.407 { 00:21:06.407 "allow_any_host": true, 00:21:06.407 "hosts": [], 00:21:06.407 "listen_addresses": [ 00:21:06.407 { 00:21:06.407 "adrfam": "IPv4", 00:21:06.407 "traddr": "10.0.0.2", 00:21:06.407 "transport": "TCP", 00:21:06.407 "trsvcid": "4420", 00:21:06.407 "trtype": "TCP" 00:21:06.407 } 00:21:06.407 ], 00:21:06.407 "max_cntlid": 65519, 00:21:06.407 "max_namespaces": 32, 00:21:06.407 "min_cntlid": 1, 00:21:06.407 "model_number": "SPDK bdev Controller", 00:21:06.407 "namespaces": [ 00:21:06.407 { 00:21:06.407 "bdev_name": "Malloc0", 00:21:06.407 "eui64": "ABCDEF0123456789", 00:21:06.407 "name": "Malloc0", 00:21:06.407 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:06.407 "nsid": 1, 00:21:06.407 "uuid": "ae34f6c0-f467-4057-982f-b2228a1618a3" 00:21:06.407 } 00:21:06.407 ], 00:21:06.407 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.407 "serial_number": "SPDK00000000000001", 00:21:06.407 "subtype": "NVMe" 00:21:06.407 } 00:21:06.407 ] 00:21:06.407 06:51:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.407 06:51:20 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:06.407 [2024-12-14 06:51:20.331583] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:06.407 [2024-12-14 06:51:20.331634] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83069 ] 00:21:06.669 [2024-12-14 06:51:20.469115] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:06.669 [2024-12-14 06:51:20.469200] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:06.669 [2024-12-14 06:51:20.469207] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:06.669 [2024-12-14 06:51:20.469217] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:06.669 [2024-12-14 06:51:20.469227] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:06.669 [2024-12-14 06:51:20.469434] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:06.669 [2024-12-14 06:51:20.469498] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d68d30 0 00:21:06.669 [2024-12-14 06:51:20.474051] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:06.669 [2024-12-14 06:51:20.474075] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:06.669 [2024-12-14 06:51:20.474081] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:06.669 [2024-12-14 06:51:20.474085] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:06.669 [2024-12-14 06:51:20.474137] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.669 [2024-12-14 06:51:20.474146] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.669 [2024-12-14 06:51:20.474150] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d68d30) 00:21:06.669 [2024-12-14 06:51:20.474181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:06.669 [2024-12-14 06:51:20.474230] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6f30, cid 0, qid 0 00:21:06.669 [2024-12-14 06:51:20.482045] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.669 [2024-12-14 06:51:20.482078] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.669 [2024-12-14 06:51:20.482100] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.669 [2024-12-14 06:51:20.482106] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc6f30) on tqpair=0x1d68d30 00:21:06.669 [2024-12-14 06:51:20.482119] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:06.669 [2024-12-14 06:51:20.482127] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:06.669 [2024-12-14 06:51:20.482133] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:06.669 [2024-12-14 06:51:20.482151] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.669 [2024-12-14 06:51:20.482156] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.669 [2024-12-14 
06:51:20.482161] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d68d30) 00:21:06.669 [2024-12-14 06:51:20.482170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.669 [2024-12-14 06:51:20.482199] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6f30, cid 0, qid 0 00:21:06.669 [2024-12-14 06:51:20.482294] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.669 [2024-12-14 06:51:20.482302] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.669 [2024-12-14 06:51:20.482306] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.669 [2024-12-14 06:51:20.482310] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc6f30) on tqpair=0x1d68d30 00:21:06.669 [2024-12-14 06:51:20.482318] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:06.669 [2024-12-14 06:51:20.482326] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:06.669 [2024-12-14 06:51:20.482334] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.669 [2024-12-14 06:51:20.482338] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.669 [2024-12-14 06:51:20.482352] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d68d30) 00:21:06.669 [2024-12-14 06:51:20.482361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.669 [2024-12-14 06:51:20.482417] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6f30, cid 0, qid 0 00:21:06.669 [2024-12-14 06:51:20.482487] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.669 [2024-12-14 06:51:20.482494] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.669 [2024-12-14 06:51:20.482498] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.669 [2024-12-14 06:51:20.482502] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc6f30) on tqpair=0x1d68d30 00:21:06.669 [2024-12-14 06:51:20.482509] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:06.669 [2024-12-14 06:51:20.482518] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:06.669 [2024-12-14 06:51:20.482525] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.669 [2024-12-14 06:51:20.482530] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.669 [2024-12-14 06:51:20.482533] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d68d30) 00:21:06.669 [2024-12-14 06:51:20.482541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.669 [2024-12-14 06:51:20.482560] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6f30, cid 0, qid 0 00:21:06.669 [2024-12-14 06:51:20.482613] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.669 [2024-12-14 06:51:20.482620] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.669 [2024-12-14 06:51:20.482624] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.669 [2024-12-14 06:51:20.482628] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc6f30) on tqpair=0x1d68d30 00:21:06.669 [2024-12-14 06:51:20.482635] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:06.669 [2024-12-14 06:51:20.482645] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.669 [2024-12-14 06:51:20.482650] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.669 [2024-12-14 06:51:20.482653] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d68d30) 00:21:06.669 [2024-12-14 06:51:20.482661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.669 [2024-12-14 06:51:20.482679] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6f30, cid 0, qid 0 00:21:06.669 [2024-12-14 06:51:20.482732] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.669 [2024-12-14 06:51:20.482739] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.669 [2024-12-14 06:51:20.482743] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.669 [2024-12-14 06:51:20.482747] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc6f30) on tqpair=0x1d68d30 00:21:06.669 [2024-12-14 06:51:20.482753] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:06.669 [2024-12-14 06:51:20.482758] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:06.669 [2024-12-14 06:51:20.482766] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:06.669 [2024-12-14 06:51:20.482872] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:06.669 [2024-12-14 06:51:20.482877] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:06.669 [2024-12-14 06:51:20.482887] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.669 [2024-12-14 06:51:20.482891] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.669 [2024-12-14 06:51:20.482895] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d68d30) 00:21:06.670 [2024-12-14 06:51:20.482902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.670 [2024-12-14 06:51:20.482921] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6f30, cid 0, qid 0 00:21:06.670 [2024-12-14 06:51:20.483016] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.670 [2024-12-14 06:51:20.483023] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.670 [2024-12-14 06:51:20.483028] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
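
The records on either side of this point walk the standard controller-enable handshake: the host reads CC and CSTS via FABRIC PROPERTY GET, observes CC.EN = 0 && CSTS.RDY = 0, writes CC.EN = 1 with a FABRIC PROPERTY SET, and then polls CSTS until RDY = 1. Below is a compile-only illustration of the fields involved, using the register layouts from spdk/nvme_spec.h; the local assignments merely stand in for the target's side of the exchange, so this is illustrative rather than anything the test executes:

    #include <assert.h>
    #include <stddef.h>

    #include "spdk/nvme_spec.h"

    int main(void)
    {
        union spdk_nvme_cc_register cc = { .raw = 0 };
        union spdk_nvme_csts_register csts = { .raw = 0 };

        /* Over fabrics, CC and CSTS are reached by property offset rather than
         * MMIO; the offsets follow the standard controller register map. */
        assert(offsetof(struct spdk_nvme_registers, cc) == 0x14);
        assert(offsetof(struct spdk_nvme_registers, csts) == 0x1c);

        /* "CC.EN = 0 && CSTS.RDY = 0": the controller is disabled, safe to enable. */
        assert(cc.bits.en == 0 && csts.bits.rdy == 0);

        /* "Setting CC.EN = 1": issued as the FABRIC PROPERTY SET seen in the trace. */
        cc.bits.en = 1;

        /* The target then reports readiness; the host keeps issuing PROPERTY GET
         * on CSTS until RDY = 1, the "wait for CSTS.RDY = 1" state below. */
        csts.bits.rdy = 1;
        assert(cc.bits.en == 1 && csts.bits.rdy == 1);

        return 0;
    }
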
00:21:06.670 [2024-12-14 06:51:20.483045] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc6f30) on tqpair=0x1d68d30 00:21:06.670 [2024-12-14 06:51:20.483052] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:06.670 [2024-12-14 06:51:20.483064] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483068] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483072] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d68d30) 00:21:06.670 [2024-12-14 06:51:20.483080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.670 [2024-12-14 06:51:20.483101] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6f30, cid 0, qid 0 00:21:06.670 [2024-12-14 06:51:20.483167] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.670 [2024-12-14 06:51:20.483174] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.670 [2024-12-14 06:51:20.483178] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483182] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc6f30) on tqpair=0x1d68d30 00:21:06.670 [2024-12-14 06:51:20.483189] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:06.670 [2024-12-14 06:51:20.483194] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:06.670 [2024-12-14 06:51:20.483202] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:06.670 [2024-12-14 06:51:20.483220] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:06.670 [2024-12-14 06:51:20.483232] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483237] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483241] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d68d30) 00:21:06.670 [2024-12-14 06:51:20.483249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.670 [2024-12-14 06:51:20.483271] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6f30, cid 0, qid 0 00:21:06.670 [2024-12-14 06:51:20.483426] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:06.670 [2024-12-14 06:51:20.483433] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:06.670 [2024-12-14 06:51:20.483437] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483441] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d68d30): datao=0, datal=4096, cccid=0 00:21:06.670 [2024-12-14 06:51:20.483446] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dc6f30) on tqpair(0x1d68d30): expected_datao=0, 
payload_size=4096 00:21:06.670 [2024-12-14 06:51:20.483455] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483459] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483468] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.670 [2024-12-14 06:51:20.483474] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.670 [2024-12-14 06:51:20.483477] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483481] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc6f30) on tqpair=0x1d68d30 00:21:06.670 [2024-12-14 06:51:20.483491] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:06.670 [2024-12-14 06:51:20.483497] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:06.670 [2024-12-14 06:51:20.483501] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:06.670 [2024-12-14 06:51:20.483507] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:06.670 [2024-12-14 06:51:20.483512] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:06.670 [2024-12-14 06:51:20.483517] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:06.670 [2024-12-14 06:51:20.483530] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:06.670 [2024-12-14 06:51:20.483538] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483543] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483546] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d68d30) 00:21:06.670 [2024-12-14 06:51:20.483554] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:06.670 [2024-12-14 06:51:20.483575] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6f30, cid 0, qid 0 00:21:06.670 [2024-12-14 06:51:20.483643] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.670 [2024-12-14 06:51:20.483649] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.670 [2024-12-14 06:51:20.483653] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483657] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc6f30) on tqpair=0x1d68d30 00:21:06.670 [2024-12-14 06:51:20.483666] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483670] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483674] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d68d30) 00:21:06.670 [2024-12-14 06:51:20.483680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.670 [2024-12-14 
06:51:20.483686] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483690] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483693] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d68d30) 00:21:06.670 [2024-12-14 06:51:20.483699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.670 [2024-12-14 06:51:20.483705] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483708] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483711] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d68d30) 00:21:06.670 [2024-12-14 06:51:20.483717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.670 [2024-12-14 06:51:20.483723] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483726] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483730] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.670 [2024-12-14 06:51:20.483735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.670 [2024-12-14 06:51:20.483740] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:06.670 [2024-12-14 06:51:20.483754] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:06.670 [2024-12-14 06:51:20.483761] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483764] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483768] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d68d30) 00:21:06.670 [2024-12-14 06:51:20.483776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.670 [2024-12-14 06:51:20.483797] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6f30, cid 0, qid 0 00:21:06.670 [2024-12-14 06:51:20.483804] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7090, cid 1, qid 0 00:21:06.670 [2024-12-14 06:51:20.483808] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc71f0, cid 2, qid 0 00:21:06.670 [2024-12-14 06:51:20.483813] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.670 [2024-12-14 06:51:20.483817] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc74b0, cid 4, qid 0 00:21:06.670 [2024-12-14 06:51:20.483919] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.670 [2024-12-14 06:51:20.483926] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.670 [2024-12-14 06:51:20.483929] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483933] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x1dc74b0) on tqpair=0x1d68d30 00:21:06.670 [2024-12-14 06:51:20.483940] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:06.670 [2024-12-14 06:51:20.483945] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:06.670 [2024-12-14 06:51:20.483988] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.483993] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.484013] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d68d30) 00:21:06.670 [2024-12-14 06:51:20.484020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.670 [2024-12-14 06:51:20.484057] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc74b0, cid 4, qid 0 00:21:06.670 [2024-12-14 06:51:20.484130] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:06.670 [2024-12-14 06:51:20.484138] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:06.670 [2024-12-14 06:51:20.484142] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.484146] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d68d30): datao=0, datal=4096, cccid=4 00:21:06.670 [2024-12-14 06:51:20.484151] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dc74b0) on tqpair(0x1d68d30): expected_datao=0, payload_size=4096 00:21:06.670 [2024-12-14 06:51:20.484159] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.484163] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.484172] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.670 [2024-12-14 06:51:20.484178] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.670 [2024-12-14 06:51:20.484182] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.670 [2024-12-14 06:51:20.484186] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc74b0) on tqpair=0x1d68d30 00:21:06.671 [2024-12-14 06:51:20.484201] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:06.671 [2024-12-14 06:51:20.484232] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.671 [2024-12-14 06:51:20.484238] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.671 [2024-12-14 06:51:20.484242] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d68d30) 00:21:06.671 [2024-12-14 06:51:20.484251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.671 [2024-12-14 06:51:20.484259] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.671 [2024-12-14 06:51:20.484263] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.671 [2024-12-14 06:51:20.484266] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d68d30) 00:21:06.671 [2024-12-14 06:51:20.484281] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.671 [2024-12-14 06:51:20.484309] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc74b0, cid 4, qid 0 00:21:06.671 [2024-12-14 06:51:20.484317] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7610, cid 5, qid 0 00:21:06.671 [2024-12-14 06:51:20.484471] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:06.671 [2024-12-14 06:51:20.484488] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:06.671 [2024-12-14 06:51:20.484493] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:06.671 [2024-12-14 06:51:20.484496] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d68d30): datao=0, datal=1024, cccid=4 00:21:06.671 [2024-12-14 06:51:20.484501] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dc74b0) on tqpair(0x1d68d30): expected_datao=0, payload_size=1024 00:21:06.671 [2024-12-14 06:51:20.484508] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:06.671 [2024-12-14 06:51:20.484512] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:06.671 [2024-12-14 06:51:20.484518] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.671 [2024-12-14 06:51:20.484524] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.671 [2024-12-14 06:51:20.484527] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.671 [2024-12-14 06:51:20.484531] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7610) on tqpair=0x1d68d30 00:21:06.671 [2024-12-14 06:51:20.525988] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.671 [2024-12-14 06:51:20.526079] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.671 [2024-12-14 06:51:20.526085] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.671 [2024-12-14 06:51:20.526089] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc74b0) on tqpair=0x1d68d30 00:21:06.671 [2024-12-14 06:51:20.526114] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.671 [2024-12-14 06:51:20.526120] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.671 [2024-12-14 06:51:20.526124] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d68d30) 00:21:06.671 [2024-12-14 06:51:20.526133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.671 [2024-12-14 06:51:20.526168] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc74b0, cid 4, qid 0 00:21:06.671 [2024-12-14 06:51:20.526268] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:06.671 [2024-12-14 06:51:20.526277] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:06.671 [2024-12-14 06:51:20.526281] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:06.671 [2024-12-14 06:51:20.526285] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d68d30): datao=0, datal=3072, cccid=4 00:21:06.671 [2024-12-14 06:51:20.526290] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dc74b0) on tqpair(0x1d68d30): expected_datao=0, payload_size=3072 00:21:06.671 [2024-12-14 
06:51:20.526298] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:06.671 [2024-12-14 06:51:20.526302] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:06.671 [2024-12-14 06:51:20.526311] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.671 [2024-12-14 06:51:20.526317] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.671 [2024-12-14 06:51:20.526321] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.671 [2024-12-14 06:51:20.526325] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc74b0) on tqpair=0x1d68d30 00:21:06.671 [2024-12-14 06:51:20.526337] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.671 [2024-12-14 06:51:20.526342] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.671 [2024-12-14 06:51:20.526346] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d68d30) 00:21:06.671 [2024-12-14 06:51:20.526368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.671 [2024-12-14 06:51:20.526409] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc74b0, cid 4, qid 0 00:21:06.671 [2024-12-14 06:51:20.526481] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:06.671 [2024-12-14 06:51:20.526495] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:06.671 [2024-12-14 06:51:20.526501] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:06.671 [2024-12-14 06:51:20.526504] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d68d30): datao=0, datal=8, cccid=4 00:21:06.671 [2024-12-14 06:51:20.526509] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dc74b0) on tqpair(0x1d68d30): expected_datao=0, payload_size=8 00:21:06.671 [2024-12-14 06:51:20.526516] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:06.671 [2024-12-14 06:51:20.526520] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:06.671 [2024-12-14 06:51:20.571095] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.671 [2024-12-14 06:51:20.571119] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.671 [2024-12-14 06:51:20.571142] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.671 [2024-12-14 06:51:20.571146] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc74b0) on tqpair=0x1d68d30 00:21:06.671 ===================================================== 00:21:06.671 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:06.671 ===================================================== 00:21:06.671 Controller Capabilities/Features 00:21:06.671 ================================ 00:21:06.671 Vendor ID: 0000 00:21:06.671 Subsystem Vendor ID: 0000 00:21:06.671 Serial Number: .................... 00:21:06.671 Model Number: ........................................ 
00:21:06.671 Firmware Version: 24.01.1 00:21:06.671 Recommended Arb Burst: 0 00:21:06.671 IEEE OUI Identifier: 00 00 00 00:21:06.671 Multi-path I/O 00:21:06.671 May have multiple subsystem ports: No 00:21:06.671 May have multiple controllers: No 00:21:06.671 Associated with SR-IOV VF: No 00:21:06.671 Max Data Transfer Size: 131072 00:21:06.671 Max Number of Namespaces: 0 00:21:06.671 Max Number of I/O Queues: 1024 00:21:06.671 NVMe Specification Version (VS): 1.3 00:21:06.671 NVMe Specification Version (Identify): 1.3 00:21:06.671 Maximum Queue Entries: 128 00:21:06.671 Contiguous Queues Required: Yes 00:21:06.671 Arbitration Mechanisms Supported 00:21:06.671 Weighted Round Robin: Not Supported 00:21:06.671 Vendor Specific: Not Supported 00:21:06.671 Reset Timeout: 15000 ms 00:21:06.671 Doorbell Stride: 4 bytes 00:21:06.671 NVM Subsystem Reset: Not Supported 00:21:06.671 Command Sets Supported 00:21:06.671 NVM Command Set: Supported 00:21:06.671 Boot Partition: Not Supported 00:21:06.671 Memory Page Size Minimum: 4096 bytes 00:21:06.671 Memory Page Size Maximum: 4096 bytes 00:21:06.671 Persistent Memory Region: Not Supported 00:21:06.671 Optional Asynchronous Events Supported 00:21:06.671 Namespace Attribute Notices: Not Supported 00:21:06.671 Firmware Activation Notices: Not Supported 00:21:06.671 ANA Change Notices: Not Supported 00:21:06.671 PLE Aggregate Log Change Notices: Not Supported 00:21:06.671 LBA Status Info Alert Notices: Not Supported 00:21:06.671 EGE Aggregate Log Change Notices: Not Supported 00:21:06.671 Normal NVM Subsystem Shutdown event: Not Supported 00:21:06.671 Zone Descriptor Change Notices: Not Supported 00:21:06.671 Discovery Log Change Notices: Supported 00:21:06.671 Controller Attributes 00:21:06.671 128-bit Host Identifier: Not Supported 00:21:06.671 Non-Operational Permissive Mode: Not Supported 00:21:06.671 NVM Sets: Not Supported 00:21:06.671 Read Recovery Levels: Not Supported 00:21:06.671 Endurance Groups: Not Supported 00:21:06.671 Predictable Latency Mode: Not Supported 00:21:06.671 Traffic Based Keep ALive: Not Supported 00:21:06.671 Namespace Granularity: Not Supported 00:21:06.671 SQ Associations: Not Supported 00:21:06.671 UUID List: Not Supported 00:21:06.671 Multi-Domain Subsystem: Not Supported 00:21:06.671 Fixed Capacity Management: Not Supported 00:21:06.671 Variable Capacity Management: Not Supported 00:21:06.671 Delete Endurance Group: Not Supported 00:21:06.671 Delete NVM Set: Not Supported 00:21:06.671 Extended LBA Formats Supported: Not Supported 00:21:06.671 Flexible Data Placement Supported: Not Supported 00:21:06.671 00:21:06.671 Controller Memory Buffer Support 00:21:06.671 ================================ 00:21:06.671 Supported: No 00:21:06.671 00:21:06.671 Persistent Memory Region Support 00:21:06.671 ================================ 00:21:06.671 Supported: No 00:21:06.671 00:21:06.671 Admin Command Set Attributes 00:21:06.671 ============================ 00:21:06.671 Security Send/Receive: Not Supported 00:21:06.671 Format NVM: Not Supported 00:21:06.671 Firmware Activate/Download: Not Supported 00:21:06.671 Namespace Management: Not Supported 00:21:06.671 Device Self-Test: Not Supported 00:21:06.671 Directives: Not Supported 00:21:06.671 NVMe-MI: Not Supported 00:21:06.671 Virtualization Management: Not Supported 00:21:06.671 Doorbell Buffer Config: Not Supported 00:21:06.671 Get LBA Status Capability: Not Supported 00:21:06.671 Command & Feature Lockdown Capability: Not Supported 00:21:06.671 Abort Command Limit: 1 00:21:06.671 
Async Event Request Limit: 4 00:21:06.671 Number of Firmware Slots: N/A 00:21:06.671 Firmware Slot 1 Read-Only: N/A 00:21:06.671 Firmware Activation Without Reset: N/A 00:21:06.671 Multiple Update Detection Support: N/A 00:21:06.671 Firmware Update Granularity: No Information Provided 00:21:06.672 Per-Namespace SMART Log: No 00:21:06.672 Asymmetric Namespace Access Log Page: Not Supported 00:21:06.672 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:06.672 Command Effects Log Page: Not Supported 00:21:06.672 Get Log Page Extended Data: Supported 00:21:06.672 Telemetry Log Pages: Not Supported 00:21:06.672 Persistent Event Log Pages: Not Supported 00:21:06.672 Supported Log Pages Log Page: May Support 00:21:06.672 Commands Supported & Effects Log Page: Not Supported 00:21:06.672 Feature Identifiers & Effects Log Page:May Support 00:21:06.672 NVMe-MI Commands & Effects Log Page: May Support 00:21:06.672 Data Area 4 for Telemetry Log: Not Supported 00:21:06.672 Error Log Page Entries Supported: 128 00:21:06.672 Keep Alive: Not Supported 00:21:06.672 00:21:06.672 NVM Command Set Attributes 00:21:06.672 ========================== 00:21:06.672 Submission Queue Entry Size 00:21:06.672 Max: 1 00:21:06.672 Min: 1 00:21:06.672 Completion Queue Entry Size 00:21:06.672 Max: 1 00:21:06.672 Min: 1 00:21:06.672 Number of Namespaces: 0 00:21:06.672 Compare Command: Not Supported 00:21:06.672 Write Uncorrectable Command: Not Supported 00:21:06.672 Dataset Management Command: Not Supported 00:21:06.672 Write Zeroes Command: Not Supported 00:21:06.672 Set Features Save Field: Not Supported 00:21:06.672 Reservations: Not Supported 00:21:06.672 Timestamp: Not Supported 00:21:06.672 Copy: Not Supported 00:21:06.672 Volatile Write Cache: Not Present 00:21:06.672 Atomic Write Unit (Normal): 1 00:21:06.672 Atomic Write Unit (PFail): 1 00:21:06.672 Atomic Compare & Write Unit: 1 00:21:06.672 Fused Compare & Write: Supported 00:21:06.672 Scatter-Gather List 00:21:06.672 SGL Command Set: Supported 00:21:06.672 SGL Keyed: Supported 00:21:06.672 SGL Bit Bucket Descriptor: Not Supported 00:21:06.672 SGL Metadata Pointer: Not Supported 00:21:06.672 Oversized SGL: Not Supported 00:21:06.672 SGL Metadata Address: Not Supported 00:21:06.672 SGL Offset: Supported 00:21:06.672 Transport SGL Data Block: Not Supported 00:21:06.672 Replay Protected Memory Block: Not Supported 00:21:06.672 00:21:06.672 Firmware Slot Information 00:21:06.672 ========================= 00:21:06.672 Active slot: 0 00:21:06.672 00:21:06.672 00:21:06.672 Error Log 00:21:06.672 ========= 00:21:06.672 00:21:06.672 Active Namespaces 00:21:06.672 ================= 00:21:06.672 Discovery Log Page 00:21:06.672 ================== 00:21:06.672 Generation Counter: 2 00:21:06.672 Number of Records: 2 00:21:06.672 Record Format: 0 00:21:06.672 00:21:06.672 Discovery Log Entry 0 00:21:06.672 ---------------------- 00:21:06.672 Transport Type: 3 (TCP) 00:21:06.672 Address Family: 1 (IPv4) 00:21:06.672 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:06.672 Entry Flags: 00:21:06.672 Duplicate Returned Information: 1 00:21:06.672 Explicit Persistent Connection Support for Discovery: 1 00:21:06.672 Transport Requirements: 00:21:06.672 Secure Channel: Not Required 00:21:06.672 Port ID: 0 (0x0000) 00:21:06.672 Controller ID: 65535 (0xffff) 00:21:06.672 Admin Max SQ Size: 128 00:21:06.672 Transport Service Identifier: 4420 00:21:06.672 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:06.672 Transport Address: 10.0.0.2 00:21:06.672 
Discovery Log Entry 1 00:21:06.672 ---------------------- 00:21:06.672 Transport Type: 3 (TCP) 00:21:06.672 Address Family: 1 (IPv4) 00:21:06.672 Subsystem Type: 2 (NVM Subsystem) 00:21:06.672 Entry Flags: 00:21:06.672 Duplicate Returned Information: 0 00:21:06.672 Explicit Persistent Connection Support for Discovery: 0 00:21:06.672 Transport Requirements: 00:21:06.672 Secure Channel: Not Required 00:21:06.672 Port ID: 0 (0x0000) 00:21:06.672 Controller ID: 65535 (0xffff) 00:21:06.672 Admin Max SQ Size: 128 00:21:06.672 Transport Service Identifier: 4420 00:21:06.672 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:06.672 Transport Address: 10.0.0.2 [2024-12-14 06:51:20.571264] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:06.672 [2024-12-14 06:51:20.571305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.672 [2024-12-14 06:51:20.571313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.672 [2024-12-14 06:51:20.571319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.672 [2024-12-14 06:51:20.571326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.672 [2024-12-14 06:51:20.571336] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.672 [2024-12-14 06:51:20.571341] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.672 [2024-12-14 06:51:20.571345] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.672 [2024-12-14 06:51:20.571353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.672 [2024-12-14 06:51:20.571406] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.672 [2024-12-14 06:51:20.571497] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.672 [2024-12-14 06:51:20.571504] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.672 [2024-12-14 06:51:20.571508] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.672 [2024-12-14 06:51:20.571511] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.672 [2024-12-14 06:51:20.571520] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.672 [2024-12-14 06:51:20.571524] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.672 [2024-12-14 06:51:20.571527] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.672 [2024-12-14 06:51:20.571550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.672 [2024-12-14 06:51:20.571573] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.672 [2024-12-14 06:51:20.571641] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.672 [2024-12-14 06:51:20.571647] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.672 [2024-12-14 06:51:20.571651] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.672 [2024-12-14 06:51:20.571655] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.672 [2024-12-14 06:51:20.571661] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:06.672 [2024-12-14 06:51:20.571666] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:06.672 [2024-12-14 06:51:20.571676] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.672 [2024-12-14 06:51:20.571680] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.672 [2024-12-14 06:51:20.571683] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.672 [2024-12-14 06:51:20.571690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.672 [2024-12-14 06:51:20.571709] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.672 [2024-12-14 06:51:20.571763] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.672 [2024-12-14 06:51:20.571778] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.672 [2024-12-14 06:51:20.571783] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.672 [2024-12-14 06:51:20.571787] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.672 [2024-12-14 06:51:20.571799] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.672 [2024-12-14 06:51:20.571804] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.672 [2024-12-14 06:51:20.571807] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.672 [2024-12-14 06:51:20.571814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.672 [2024-12-14 06:51:20.571834] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.672 [2024-12-14 06:51:20.571884] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.672 [2024-12-14 06:51:20.571898] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.672 [2024-12-14 06:51:20.571902] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.672 [2024-12-14 06:51:20.571906] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.672 [2024-12-14 06:51:20.571918] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.672 [2024-12-14 06:51:20.571922] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.672 [2024-12-14 06:51:20.571926] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.672 [2024-12-14 06:51:20.571933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.672 [2024-12-14 06:51:20.571975] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.672 [2024-12-14 06:51:20.572058] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.672 [2024-12-14 
06:51:20.572072] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.672 [2024-12-14 06:51:20.572078] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.672 [2024-12-14 06:51:20.572082] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.672 [2024-12-14 06:51:20.572094] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.672 [2024-12-14 06:51:20.572098] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.672 [2024-12-14 06:51:20.572117] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.672 [2024-12-14 06:51:20.572125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.672 [2024-12-14 06:51:20.572144] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.672 [2024-12-14 06:51:20.572202] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.673 [2024-12-14 06:51:20.572214] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.673 [2024-12-14 06:51:20.572233] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.572237] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.673 [2024-12-14 06:51:20.572248] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.572253] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.572256] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.673 [2024-12-14 06:51:20.572263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.673 [2024-12-14 06:51:20.572281] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.673 [2024-12-14 06:51:20.572333] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.673 [2024-12-14 06:51:20.572348] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.673 [2024-12-14 06:51:20.572351] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.572370] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.673 [2024-12-14 06:51:20.572381] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.572385] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.572388] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.673 [2024-12-14 06:51:20.572410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.673 [2024-12-14 06:51:20.572427] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.673 [2024-12-14 06:51:20.572492] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.673 [2024-12-14 06:51:20.572502] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.673 [2024-12-14 06:51:20.572507] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:21:06.673 [2024-12-14 06:51:20.572510] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.673 [2024-12-14 06:51:20.572521] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.572525] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.572529] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.673 [2024-12-14 06:51:20.572546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.673 [2024-12-14 06:51:20.572579] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.673 [2024-12-14 06:51:20.572638] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.673 [2024-12-14 06:51:20.572645] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.673 [2024-12-14 06:51:20.572648] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.572652] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.673 [2024-12-14 06:51:20.572662] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.572667] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.572670] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.673 [2024-12-14 06:51:20.572677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.673 [2024-12-14 06:51:20.572694] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.673 [2024-12-14 06:51:20.572747] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.673 [2024-12-14 06:51:20.572754] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.673 [2024-12-14 06:51:20.572758] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.572762] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.673 [2024-12-14 06:51:20.572773] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.572777] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.572781] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.673 [2024-12-14 06:51:20.572788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.673 [2024-12-14 06:51:20.572805] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.673 [2024-12-14 06:51:20.572861] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.673 [2024-12-14 06:51:20.572867] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.673 [2024-12-14 06:51:20.572871] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.572875] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.673 [2024-12-14 06:51:20.572885] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.572889] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.572893] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.673 [2024-12-14 06:51:20.572900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.673 [2024-12-14 06:51:20.572917] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.673 [2024-12-14 06:51:20.572988] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.673 [2024-12-14 06:51:20.573012] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.673 [2024-12-14 06:51:20.573016] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.573020] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.673 [2024-12-14 06:51:20.573032] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.573036] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.573054] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.673 [2024-12-14 06:51:20.573062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.673 [2024-12-14 06:51:20.573096] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.673 [2024-12-14 06:51:20.573161] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.673 [2024-12-14 06:51:20.573168] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.673 [2024-12-14 06:51:20.573172] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.573176] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.673 [2024-12-14 06:51:20.573187] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.573191] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.573195] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.673 [2024-12-14 06:51:20.573202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.673 [2024-12-14 06:51:20.573220] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.673 [2024-12-14 06:51:20.573304] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.673 [2024-12-14 06:51:20.573311] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.673 [2024-12-14 06:51:20.573315] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.573319] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.673 [2024-12-14 06:51:20.573330] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.573335] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.673 [2024-12-14 
06:51:20.573361] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.673 [2024-12-14 06:51:20.573379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.673 [2024-12-14 06:51:20.573421] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.673 [2024-12-14 06:51:20.573473] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.673 [2024-12-14 06:51:20.573479] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.673 [2024-12-14 06:51:20.573483] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.573487] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.673 [2024-12-14 06:51:20.573497] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.573501] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.573505] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.673 [2024-12-14 06:51:20.573512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.673 [2024-12-14 06:51:20.573529] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.673 [2024-12-14 06:51:20.573583] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.673 [2024-12-14 06:51:20.573590] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.673 [2024-12-14 06:51:20.573594] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.673 [2024-12-14 06:51:20.573597] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.674 [2024-12-14 06:51:20.573608] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.573612] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.573616] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.674 [2024-12-14 06:51:20.573622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.674 [2024-12-14 06:51:20.573639] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.674 [2024-12-14 06:51:20.573697] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.674 [2024-12-14 06:51:20.573704] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.674 [2024-12-14 06:51:20.573707] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.573711] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.674 [2024-12-14 06:51:20.573721] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.573726] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.573729] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.674 [2024-12-14 06:51:20.573736] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.674 [2024-12-14 06:51:20.573753] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.674 [2024-12-14 06:51:20.573811] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.674 [2024-12-14 06:51:20.573817] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.674 [2024-12-14 06:51:20.573821] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.573825] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.674 [2024-12-14 06:51:20.573835] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.573839] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.573843] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.674 [2024-12-14 06:51:20.573850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.674 [2024-12-14 06:51:20.573866] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.674 [2024-12-14 06:51:20.573921] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.674 [2024-12-14 06:51:20.573927] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.674 [2024-12-14 06:51:20.573931] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.573935] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.674 [2024-12-14 06:51:20.573945] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.573949] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.573953] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.674 [2024-12-14 06:51:20.573960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.674 [2024-12-14 06:51:20.573976] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.674 [2024-12-14 06:51:20.574089] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.674 [2024-12-14 06:51:20.574102] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.674 [2024-12-14 06:51:20.574107] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.574111] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.674 [2024-12-14 06:51:20.574124] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.574128] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.574132] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.674 [2024-12-14 06:51:20.574140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.674 [2024-12-14 06:51:20.574161] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.674 [2024-12-14 06:51:20.574219] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.674 [2024-12-14 06:51:20.574231] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.674 [2024-12-14 06:51:20.574236] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.574240] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.674 [2024-12-14 06:51:20.574252] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.574257] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.574261] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.674 [2024-12-14 06:51:20.574269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.674 [2024-12-14 06:51:20.574287] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.674 [2024-12-14 06:51:20.574382] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.674 [2024-12-14 06:51:20.574398] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.674 [2024-12-14 06:51:20.574402] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.574406] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.674 [2024-12-14 06:51:20.574417] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.574421] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.574424] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.674 [2024-12-14 06:51:20.574432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.674 [2024-12-14 06:51:20.574449] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.674 [2024-12-14 06:51:20.574505] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.674 [2024-12-14 06:51:20.574511] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.674 [2024-12-14 06:51:20.574515] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.574519] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.674 [2024-12-14 06:51:20.574529] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.574533] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.574537] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.674 [2024-12-14 06:51:20.574544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.674 [2024-12-14 06:51:20.574561] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.674 [2024-12-14 06:51:20.574612] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:21:06.674 [2024-12-14 06:51:20.574619] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.674 [2024-12-14 06:51:20.574622] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.574626] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.674 [2024-12-14 06:51:20.574648] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.574652] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.574656] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.674 [2024-12-14 06:51:20.574663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.674 [2024-12-14 06:51:20.574679] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.674 [2024-12-14 06:51:20.574736] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.674 [2024-12-14 06:51:20.574747] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.674 [2024-12-14 06:51:20.574751] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.574755] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.674 [2024-12-14 06:51:20.574767] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.574771] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.574775] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.674 [2024-12-14 06:51:20.574782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.674 [2024-12-14 06:51:20.574799] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.674 [2024-12-14 06:51:20.574851] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.674 [2024-12-14 06:51:20.574858] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.674 [2024-12-14 06:51:20.574861] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.574865] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.674 [2024-12-14 06:51:20.574876] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.574880] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.574883] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.674 [2024-12-14 06:51:20.574890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.674 [2024-12-14 06:51:20.574907] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.674 [2024-12-14 06:51:20.578085] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.674 [2024-12-14 06:51:20.578104] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.674 [2024-12-14 06:51:20.578124] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.578129] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.674 [2024-12-14 06:51:20.578144] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.578148] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.578152] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d68d30) 00:21:06.674 [2024-12-14 06:51:20.578160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.674 [2024-12-14 06:51:20.578185] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc7350, cid 3, qid 0 00:21:06.674 [2024-12-14 06:51:20.578288] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.674 [2024-12-14 06:51:20.578295] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.674 [2024-12-14 06:51:20.578299] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.674 [2024-12-14 06:51:20.578303] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dc7350) on tqpair=0x1d68d30 00:21:06.674 [2024-12-14 06:51:20.578312] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:21:06.675 00:21:06.675 06:51:20 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:06.675 [2024-12-14 06:51:20.613491] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:06.675 [2024-12-14 06:51:20.613543] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83078 ] 00:21:06.938 [2024-12-14 06:51:20.753156] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:06.938 [2024-12-14 06:51:20.753234] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:06.938 [2024-12-14 06:51:20.753241] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:06.938 [2024-12-14 06:51:20.753253] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:06.938 [2024-12-14 06:51:20.753262] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:06.938 [2024-12-14 06:51:20.753411] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:06.938 [2024-12-14 06:51:20.753460] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x242dd30 0 00:21:06.938 [2024-12-14 06:51:20.759081] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:06.938 [2024-12-14 06:51:20.759106] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:06.938 [2024-12-14 06:51:20.759128] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:06.938 [2024-12-14 06:51:20.759131] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:06.938 [2024-12-14 06:51:20.759177] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.759184] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.759188] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242dd30) 00:21:06.938 [2024-12-14 06:51:20.759199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:06.938 [2024-12-14 06:51:20.759230] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248bf30, cid 0, qid 0 00:21:06.938 [2024-12-14 06:51:20.766995] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.938 [2024-12-14 06:51:20.767028] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.938 [2024-12-14 06:51:20.767049] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.767054] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248bf30) on tqpair=0x242dd30 00:21:06.938 [2024-12-14 06:51:20.767066] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:06.938 [2024-12-14 06:51:20.767073] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:06.938 [2024-12-14 06:51:20.767079] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:06.938 [2024-12-14 06:51:20.767093] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.767098] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.767101] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242dd30) 00:21:06.938 [2024-12-14 06:51:20.767110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.938 [2024-12-14 06:51:20.767137] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248bf30, cid 0, qid 0 00:21:06.938 [2024-12-14 06:51:20.767206] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.938 [2024-12-14 06:51:20.767213] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.938 [2024-12-14 06:51:20.767216] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.767220] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248bf30) on tqpair=0x242dd30 00:21:06.938 [2024-12-14 06:51:20.767241] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:06.938 [2024-12-14 06:51:20.767275] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:06.938 [2024-12-14 06:51:20.767283] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.767287] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.767290] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242dd30) 00:21:06.938 [2024-12-14 06:51:20.767298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.938 [2024-12-14 06:51:20.767317] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248bf30, cid 0, qid 0 00:21:06.938 [2024-12-14 06:51:20.767383] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.938 [2024-12-14 06:51:20.767389] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.938 [2024-12-14 06:51:20.767393] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.767397] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248bf30) on tqpair=0x242dd30 00:21:06.938 [2024-12-14 06:51:20.767403] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:06.938 [2024-12-14 06:51:20.767411] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:06.938 [2024-12-14 06:51:20.767418] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.767422] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.767426] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242dd30) 00:21:06.938 [2024-12-14 06:51:20.767433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.938 [2024-12-14 06:51:20.767452] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248bf30, cid 0, qid 0 00:21:06.938 [2024-12-14 06:51:20.767509] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.938 [2024-12-14 06:51:20.767515] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.938 [2024-12-14 
06:51:20.767519] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.767522] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248bf30) on tqpair=0x242dd30 00:21:06.938 [2024-12-14 06:51:20.767529] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:06.938 [2024-12-14 06:51:20.767539] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.767543] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.767547] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242dd30) 00:21:06.938 [2024-12-14 06:51:20.767554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.938 [2024-12-14 06:51:20.767571] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248bf30, cid 0, qid 0 00:21:06.938 [2024-12-14 06:51:20.767628] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.938 [2024-12-14 06:51:20.767635] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.938 [2024-12-14 06:51:20.767638] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.767642] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248bf30) on tqpair=0x242dd30 00:21:06.938 [2024-12-14 06:51:20.767647] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:06.938 [2024-12-14 06:51:20.767652] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:06.938 [2024-12-14 06:51:20.767660] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:06.938 [2024-12-14 06:51:20.767766] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:06.938 [2024-12-14 06:51:20.767779] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:06.938 [2024-12-14 06:51:20.767793] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.767797] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.767800] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242dd30) 00:21:06.938 [2024-12-14 06:51:20.767808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.938 [2024-12-14 06:51:20.767828] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248bf30, cid 0, qid 0 00:21:06.938 [2024-12-14 06:51:20.767898] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.938 [2024-12-14 06:51:20.767905] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.938 [2024-12-14 06:51:20.767908] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.767912] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248bf30) on tqpair=0x242dd30 00:21:06.938 
[2024-12-14 06:51:20.767918] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:06.938 [2024-12-14 06:51:20.767927] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.767932] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.767935] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242dd30) 00:21:06.938 [2024-12-14 06:51:20.767942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.938 [2024-12-14 06:51:20.767988] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248bf30, cid 0, qid 0 00:21:06.938 [2024-12-14 06:51:20.768058] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.938 [2024-12-14 06:51:20.768072] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.938 [2024-12-14 06:51:20.768077] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.768081] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248bf30) on tqpair=0x242dd30 00:21:06.938 [2024-12-14 06:51:20.768086] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:06.938 [2024-12-14 06:51:20.768092] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:06.938 [2024-12-14 06:51:20.768100] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:06.938 [2024-12-14 06:51:20.768117] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:06.938 [2024-12-14 06:51:20.768127] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.768132] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.938 [2024-12-14 06:51:20.768135] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242dd30) 00:21:06.938 [2024-12-14 06:51:20.768143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.938 [2024-12-14 06:51:20.768165] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248bf30, cid 0, qid 0 00:21:06.938 [2024-12-14 06:51:20.768281] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:06.939 [2024-12-14 06:51:20.768288] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:06.939 [2024-12-14 06:51:20.768292] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.768296] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242dd30): datao=0, datal=4096, cccid=0 00:21:06.939 [2024-12-14 06:51:20.768300] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248bf30) on tqpair(0x242dd30): expected_datao=0, payload_size=4096 00:21:06.939 [2024-12-14 06:51:20.768309] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.768313] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.768321] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.939 [2024-12-14 06:51:20.768343] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.939 [2024-12-14 06:51:20.768347] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.768351] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248bf30) on tqpair=0x242dd30 00:21:06.939 [2024-12-14 06:51:20.768385] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:06.939 [2024-12-14 06:51:20.768391] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:06.939 [2024-12-14 06:51:20.768396] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:06.939 [2024-12-14 06:51:20.768400] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:06.939 [2024-12-14 06:51:20.768405] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:06.939 [2024-12-14 06:51:20.768410] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:06.939 [2024-12-14 06:51:20.768434] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:06.939 [2024-12-14 06:51:20.768443] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.768447] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.768451] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242dd30) 00:21:06.939 [2024-12-14 06:51:20.768459] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:06.939 [2024-12-14 06:51:20.768480] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248bf30, cid 0, qid 0 00:21:06.939 [2024-12-14 06:51:20.768543] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.939 [2024-12-14 06:51:20.768550] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.939 [2024-12-14 06:51:20.768553] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.768557] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248bf30) on tqpair=0x242dd30 00:21:06.939 [2024-12-14 06:51:20.768565] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.768569] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.768573] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242dd30) 00:21:06.939 [2024-12-14 06:51:20.768580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.939 [2024-12-14 06:51:20.768586] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.768590] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.768594] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x242dd30) 00:21:06.939 [2024-12-14 06:51:20.768600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.939 [2024-12-14 06:51:20.768606] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.768609] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.768613] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x242dd30) 00:21:06.939 [2024-12-14 06:51:20.768619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.939 [2024-12-14 06:51:20.768625] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.768628] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.768632] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.939 [2024-12-14 06:51:20.768638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.939 [2024-12-14 06:51:20.768643] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:06.939 [2024-12-14 06:51:20.768656] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:06.939 [2024-12-14 06:51:20.768663] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.768667] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.768671] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242dd30) 00:21:06.939 [2024-12-14 06:51:20.768678] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.939 [2024-12-14 06:51:20.768699] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248bf30, cid 0, qid 0 00:21:06.939 [2024-12-14 06:51:20.768706] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c090, cid 1, qid 0 00:21:06.939 [2024-12-14 06:51:20.768711] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c1f0, cid 2, qid 0 00:21:06.939 [2024-12-14 06:51:20.768716] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.939 [2024-12-14 06:51:20.768721] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c4b0, cid 4, qid 0 00:21:06.939 [2024-12-14 06:51:20.768818] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.939 [2024-12-14 06:51:20.768825] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.939 [2024-12-14 06:51:20.768828] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.768832] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c4b0) on tqpair=0x242dd30 00:21:06.939 [2024-12-14 06:51:20.768838] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:06.939 [2024-12-14 06:51:20.768844] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:06.939 [2024-12-14 06:51:20.768852] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:06.939 [2024-12-14 06:51:20.768864] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:06.939 [2024-12-14 06:51:20.768872] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.768876] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.768880] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242dd30) 00:21:06.939 [2024-12-14 06:51:20.768887] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:06.939 [2024-12-14 06:51:20.768907] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c4b0, cid 4, qid 0 00:21:06.939 [2024-12-14 06:51:20.768968] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.939 [2024-12-14 06:51:20.768975] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.939 [2024-12-14 06:51:20.768979] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.768983] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c4b0) on tqpair=0x242dd30 00:21:06.939 [2024-12-14 06:51:20.769068] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:06.939 [2024-12-14 06:51:20.769082] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:06.939 [2024-12-14 06:51:20.769091] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.769095] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.769099] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242dd30) 00:21:06.939 [2024-12-14 06:51:20.769106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.939 [2024-12-14 06:51:20.769129] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c4b0, cid 4, qid 0 00:21:06.939 [2024-12-14 06:51:20.769204] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:06.939 [2024-12-14 06:51:20.769211] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:06.939 [2024-12-14 06:51:20.769215] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.769219] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242dd30): datao=0, datal=4096, cccid=4 00:21:06.939 [2024-12-14 06:51:20.769224] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248c4b0) on tqpair(0x242dd30): expected_datao=0, payload_size=4096 00:21:06.939 [2024-12-14 06:51:20.769232] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.769236] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:21:06.939 [2024-12-14 06:51:20.769245] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.939 [2024-12-14 06:51:20.769251] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.939 [2024-12-14 06:51:20.769254] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.769258] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c4b0) on tqpair=0x242dd30 00:21:06.939 [2024-12-14 06:51:20.769278] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:06.939 [2024-12-14 06:51:20.769290] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:06.939 [2024-12-14 06:51:20.769301] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:06.939 [2024-12-14 06:51:20.769309] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.769313] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.769317] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242dd30) 00:21:06.939 [2024-12-14 06:51:20.769324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.939 [2024-12-14 06:51:20.769345] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c4b0, cid 4, qid 0 00:21:06.939 [2024-12-14 06:51:20.769436] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:06.939 [2024-12-14 06:51:20.769443] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:06.939 [2024-12-14 06:51:20.769447] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:06.939 [2024-12-14 06:51:20.769451] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242dd30): datao=0, datal=4096, cccid=4 00:21:06.939 [2024-12-14 06:51:20.769455] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248c4b0) on tqpair(0x242dd30): expected_datao=0, payload_size=4096 00:21:06.939 [2024-12-14 06:51:20.769463] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.769467] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.769476] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.940 [2024-12-14 06:51:20.769482] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.940 [2024-12-14 06:51:20.769486] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.769490] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c4b0) on tqpair=0x242dd30 00:21:06.940 [2024-12-14 06:51:20.769508] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:06.940 [2024-12-14 06:51:20.769520] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:06.940 [2024-12-14 06:51:20.769529] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.769534] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.769537] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242dd30) 00:21:06.940 [2024-12-14 06:51:20.769545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.940 [2024-12-14 06:51:20.769565] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c4b0, cid 4, qid 0 00:21:06.940 [2024-12-14 06:51:20.769633] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:06.940 [2024-12-14 06:51:20.769645] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:06.940 [2024-12-14 06:51:20.769650] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.769654] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242dd30): datao=0, datal=4096, cccid=4 00:21:06.940 [2024-12-14 06:51:20.769658] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248c4b0) on tqpair(0x242dd30): expected_datao=0, payload_size=4096 00:21:06.940 [2024-12-14 06:51:20.769666] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.769670] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.769679] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.940 [2024-12-14 06:51:20.769685] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.940 [2024-12-14 06:51:20.769689] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.769693] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c4b0) on tqpair=0x242dd30 00:21:06.940 [2024-12-14 06:51:20.769703] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:06.940 [2024-12-14 06:51:20.769727] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:06.940 [2024-12-14 06:51:20.769739] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:06.940 [2024-12-14 06:51:20.769746] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:06.940 [2024-12-14 06:51:20.769751] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:06.940 [2024-12-14 06:51:20.769757] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:06.940 [2024-12-14 06:51:20.769761] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:06.940 [2024-12-14 06:51:20.769766] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:06.940 [2024-12-14 06:51:20.769781] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.769801] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.769805] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242dd30) 00:21:06.940 [2024-12-14 06:51:20.769813] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.940 [2024-12-14 06:51:20.769820] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.769824] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.769827] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x242dd30) 00:21:06.940 [2024-12-14 06:51:20.769833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.940 [2024-12-14 06:51:20.769861] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c4b0, cid 4, qid 0 00:21:06.940 [2024-12-14 06:51:20.769869] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c610, cid 5, qid 0 00:21:06.940 [2024-12-14 06:51:20.769943] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.940 [2024-12-14 06:51:20.769950] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.940 [2024-12-14 06:51:20.769953] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.769957] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c4b0) on tqpair=0x242dd30 00:21:06.940 [2024-12-14 06:51:20.770000] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.940 [2024-12-14 06:51:20.770053] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.940 [2024-12-14 06:51:20.770057] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.770061] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c610) on tqpair=0x242dd30 00:21:06.940 [2024-12-14 06:51:20.770075] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.770080] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.770084] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x242dd30) 00:21:06.940 [2024-12-14 06:51:20.770092] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.940 [2024-12-14 06:51:20.770116] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c610, cid 5, qid 0 00:21:06.940 [2024-12-14 06:51:20.770189] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.940 [2024-12-14 06:51:20.770201] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.940 [2024-12-14 06:51:20.770206] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.770210] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c610) on tqpair=0x242dd30 00:21:06.940 [2024-12-14 06:51:20.770222] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.770227] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.770231] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x242dd30) 00:21:06.940 [2024-12-14 06:51:20.770239] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.940 [2024-12-14 06:51:20.770258] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c610, cid 5, qid 0 00:21:06.940 [2024-12-14 06:51:20.770357] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.940 [2024-12-14 06:51:20.770368] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.940 [2024-12-14 06:51:20.770372] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.770376] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c610) on tqpair=0x242dd30 00:21:06.940 [2024-12-14 06:51:20.770387] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.770391] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.770395] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x242dd30) 00:21:06.940 [2024-12-14 06:51:20.770402] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.940 [2024-12-14 06:51:20.770420] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c610, cid 5, qid 0 00:21:06.940 [2024-12-14 06:51:20.770495] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.940 [2024-12-14 06:51:20.770502] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.940 [2024-12-14 06:51:20.770506] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.770510] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c610) on tqpair=0x242dd30 00:21:06.940 [2024-12-14 06:51:20.770525] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.770530] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.770534] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x242dd30) 00:21:06.940 [2024-12-14 06:51:20.770541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.940 [2024-12-14 06:51:20.770549] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.770552] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.770556] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242dd30) 00:21:06.940 [2024-12-14 06:51:20.770562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.940 [2024-12-14 06:51:20.770570] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.770573] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.770577] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x242dd30) 00:21:06.940 [2024-12-14 06:51:20.770583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:06.940 [2024-12-14 06:51:20.770591] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.770595] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.770599] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x242dd30) 00:21:06.940 [2024-12-14 06:51:20.770605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.940 [2024-12-14 06:51:20.770626] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c610, cid 5, qid 0 00:21:06.940 [2024-12-14 06:51:20.770633] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c4b0, cid 4, qid 0 00:21:06.940 [2024-12-14 06:51:20.770638] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c770, cid 6, qid 0 00:21:06.940 [2024-12-14 06:51:20.770642] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c8d0, cid 7, qid 0 00:21:06.940 [2024-12-14 06:51:20.770793] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:06.940 [2024-12-14 06:51:20.770801] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:06.940 [2024-12-14 06:51:20.770805] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:06.940 [2024-12-14 06:51:20.770809] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242dd30): datao=0, datal=8192, cccid=5 00:21:06.940 [2024-12-14 06:51:20.770814] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248c610) on tqpair(0x242dd30): expected_datao=0, payload_size=8192 00:21:06.940 [2024-12-14 06:51:20.770831] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:06.941 [2024-12-14 06:51:20.770836] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:06.941 [2024-12-14 06:51:20.770843] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:06.941 [2024-12-14 06:51:20.770849] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:06.941 [2024-12-14 06:51:20.770852] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:06.941 [2024-12-14 06:51:20.770856] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242dd30): datao=0, datal=512, cccid=4 00:21:06.941 [2024-12-14 06:51:20.770860] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248c4b0) on tqpair(0x242dd30): expected_datao=0, payload_size=512 00:21:06.941 [2024-12-14 06:51:20.770867] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:06.941 [2024-12-14 06:51:20.770871] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:06.941 [2024-12-14 06:51:20.770877] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:06.941 [2024-12-14 06:51:20.770882] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:06.941 [2024-12-14 06:51:20.770886] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:06.941 [2024-12-14 06:51:20.770889] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242dd30): datao=0, datal=512, cccid=6 00:21:06.941 [2024-12-14 06:51:20.770894] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248c770) on tqpair(0x242dd30): expected_datao=0, payload_size=512 00:21:06.941 [2024-12-14 06:51:20.770900] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:06.941 [2024-12-14 06:51:20.770904] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:06.941 [2024-12-14 06:51:20.770910] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:06.941 [2024-12-14 06:51:20.770915] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:06.941 [2024-12-14 06:51:20.770919] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:06.941 [2024-12-14 06:51:20.770922] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242dd30): datao=0, datal=4096, cccid=7 00:21:06.941 [2024-12-14 06:51:20.770927] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248c8d0) on tqpair(0x242dd30): expected_datao=0, payload_size=4096 00:21:06.941 [2024-12-14 06:51:20.770934] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:06.941 [2024-12-14 06:51:20.770937] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:06.941 [2024-12-14 06:51:20.770946] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.941 [2024-12-14 06:51:20.770952] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.941 [2024-12-14 06:51:20.770955] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.941 [2024-12-14 06:51:20.775053] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c610) on tqpair=0x242dd30 00:21:06.941 [2024-12-14 06:51:20.775079] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.941 [2024-12-14 06:51:20.775087] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.941 [2024-12-14 06:51:20.775091] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.941 [2024-12-14 06:51:20.775094] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c4b0) on tqpair=0x242dd30 00:21:06.941 ===================================================== 00:21:06.941 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:06.941 ===================================================== 00:21:06.941 Controller Capabilities/Features 00:21:06.941 ================================ 00:21:06.941 Vendor ID: 8086 00:21:06.941 Subsystem Vendor ID: 8086 00:21:06.941 Serial Number: SPDK00000000000001 00:21:06.941 Model Number: SPDK bdev Controller 00:21:06.941 Firmware Version: 24.01.1 00:21:06.941 Recommended Arb Burst: 6 00:21:06.941 IEEE OUI Identifier: e4 d2 5c 00:21:06.941 Multi-path I/O 00:21:06.941 May have multiple subsystem ports: Yes 00:21:06.941 May have multiple controllers: Yes 00:21:06.941 Associated with SR-IOV VF: No 00:21:06.941 Max Data Transfer Size: 131072 00:21:06.941 Max Number of Namespaces: 32 00:21:06.941 Max Number of I/O Queues: 127 00:21:06.941 NVMe Specification Version (VS): 1.3 00:21:06.941 NVMe Specification Version (Identify): 1.3 00:21:06.941 Maximum Queue Entries: 128 00:21:06.941 Contiguous Queues Required: Yes 00:21:06.941 Arbitration Mechanisms Supported 00:21:06.941 Weighted Round Robin: Not Supported 00:21:06.941 Vendor Specific: Not Supported 00:21:06.941 Reset Timeout: 15000 ms 00:21:06.941 Doorbell Stride: 4 bytes 00:21:06.941 NVM Subsystem Reset: Not Supported 00:21:06.941 Command Sets Supported 00:21:06.941 NVM Command Set: Supported 00:21:06.941 Boot Partition: Not Supported 00:21:06.941 Memory Page Size Minimum: 4096 bytes 00:21:06.941 Memory Page Size Maximum: 4096 bytes 00:21:06.941 Persistent 
Memory Region: Not Supported 00:21:06.941 Optional Asynchronous Events Supported 00:21:06.941 Namespace Attribute Notices: Supported 00:21:06.941 Firmware Activation Notices: Not Supported 00:21:06.941 ANA Change Notices: Not Supported 00:21:06.941 PLE Aggregate Log Change Notices: Not Supported 00:21:06.941 LBA Status Info Alert Notices: Not Supported 00:21:06.941 EGE Aggregate Log Change Notices: Not Supported 00:21:06.941 Normal NVM Subsystem Shutdown event: Not Supported 00:21:06.941 Zone Descriptor Change Notices: Not Supported 00:21:06.941 Discovery Log Change Notices: Not Supported 00:21:06.941 Controller Attributes 00:21:06.941 128-bit Host Identifier: Supported 00:21:06.941 Non-Operational Permissive Mode: Not Supported 00:21:06.941 NVM Sets: Not Supported 00:21:06.941 Read Recovery Levels: Not Supported 00:21:06.941 Endurance Groups: Not Supported 00:21:06.941 Predictable Latency Mode: Not Supported 00:21:06.941 Traffic Based Keep ALive: Not Supported 00:21:06.941 Namespace Granularity: Not Supported 00:21:06.941 SQ Associations: Not Supported 00:21:06.941 UUID List: Not Supported 00:21:06.941 Multi-Domain Subsystem: Not Supported 00:21:06.941 Fixed Capacity Management: Not Supported 00:21:06.941 Variable Capacity Management: Not Supported 00:21:06.941 Delete Endurance Group: Not Supported 00:21:06.941 Delete NVM Set: Not Supported 00:21:06.941 Extended LBA Formats Supported: Not Supported 00:21:06.941 Flexible Data Placement Supported: Not Supported 00:21:06.941 00:21:06.941 Controller Memory Buffer Support 00:21:06.941 ================================ 00:21:06.941 Supported: No 00:21:06.941 00:21:06.941 Persistent Memory Region Support 00:21:06.941 ================================ 00:21:06.941 Supported: No 00:21:06.941 00:21:06.941 Admin Command Set Attributes 00:21:06.941 ============================ 00:21:06.941 Security Send/Receive: Not Supported 00:21:06.941 Format NVM: Not Supported 00:21:06.941 Firmware Activate/Download: Not Supported 00:21:06.941 Namespace Management: Not Supported 00:21:06.941 Device Self-Test: Not Supported 00:21:06.941 Directives: Not Supported 00:21:06.941 NVMe-MI: Not Supported 00:21:06.941 Virtualization Management: Not Supported 00:21:06.941 Doorbell Buffer Config: Not Supported 00:21:06.941 Get LBA Status Capability: Not Supported 00:21:06.941 Command & Feature Lockdown Capability: Not Supported 00:21:06.941 Abort Command Limit: 4 00:21:06.941 Async Event Request Limit: 4 00:21:06.941 Number of Firmware Slots: N/A 00:21:06.941 Firmware Slot 1 Read-Only: N/A 00:21:06.941 Firmware Activation Without Reset: [2024-12-14 06:51:20.775118] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.941 [2024-12-14 06:51:20.775124] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.941 [2024-12-14 06:51:20.775127] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.941 [2024-12-14 06:51:20.775131] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c770) on tqpair=0x242dd30 00:21:06.941 [2024-12-14 06:51:20.775138] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.941 [2024-12-14 06:51:20.775144] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.941 [2024-12-14 06:51:20.775147] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.941 [2024-12-14 06:51:20.775150] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c8d0) on tqpair=0x242dd30 00:21:06.941 N/A 00:21:06.941 Multiple 
Update Detection Support: N/A 00:21:06.941 Firmware Update Granularity: No Information Provided 00:21:06.941 Per-Namespace SMART Log: No 00:21:06.941 Asymmetric Namespace Access Log Page: Not Supported 00:21:06.941 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:06.941 Command Effects Log Page: Supported 00:21:06.941 Get Log Page Extended Data: Supported 00:21:06.941 Telemetry Log Pages: Not Supported 00:21:06.941 Persistent Event Log Pages: Not Supported 00:21:06.941 Supported Log Pages Log Page: May Support 00:21:06.941 Commands Supported & Effects Log Page: Not Supported 00:21:06.941 Feature Identifiers & Effects Log Page:May Support 00:21:06.941 NVMe-MI Commands & Effects Log Page: May Support 00:21:06.941 Data Area 4 for Telemetry Log: Not Supported 00:21:06.941 Error Log Page Entries Supported: 128 00:21:06.941 Keep Alive: Supported 00:21:06.941 Keep Alive Granularity: 10000 ms 00:21:06.941 00:21:06.941 NVM Command Set Attributes 00:21:06.941 ========================== 00:21:06.941 Submission Queue Entry Size 00:21:06.941 Max: 64 00:21:06.941 Min: 64 00:21:06.941 Completion Queue Entry Size 00:21:06.941 Max: 16 00:21:06.941 Min: 16 00:21:06.941 Number of Namespaces: 32 00:21:06.941 Compare Command: Supported 00:21:06.941 Write Uncorrectable Command: Not Supported 00:21:06.941 Dataset Management Command: Supported 00:21:06.941 Write Zeroes Command: Supported 00:21:06.941 Set Features Save Field: Not Supported 00:21:06.941 Reservations: Supported 00:21:06.941 Timestamp: Not Supported 00:21:06.941 Copy: Supported 00:21:06.941 Volatile Write Cache: Present 00:21:06.941 Atomic Write Unit (Normal): 1 00:21:06.941 Atomic Write Unit (PFail): 1 00:21:06.941 Atomic Compare & Write Unit: 1 00:21:06.941 Fused Compare & Write: Supported 00:21:06.941 Scatter-Gather List 00:21:06.941 SGL Command Set: Supported 00:21:06.941 SGL Keyed: Supported 00:21:06.941 SGL Bit Bucket Descriptor: Not Supported 00:21:06.941 SGL Metadata Pointer: Not Supported 00:21:06.941 Oversized SGL: Not Supported 00:21:06.942 SGL Metadata Address: Not Supported 00:21:06.942 SGL Offset: Supported 00:21:06.942 Transport SGL Data Block: Not Supported 00:21:06.942 Replay Protected Memory Block: Not Supported 00:21:06.942 00:21:06.942 Firmware Slot Information 00:21:06.942 ========================= 00:21:06.942 Active slot: 1 00:21:06.942 Slot 1 Firmware Revision: 24.01.1 00:21:06.942 00:21:06.942 00:21:06.942 Commands Supported and Effects 00:21:06.942 ============================== 00:21:06.942 Admin Commands 00:21:06.942 -------------- 00:21:06.942 Get Log Page (02h): Supported 00:21:06.942 Identify (06h): Supported 00:21:06.942 Abort (08h): Supported 00:21:06.942 Set Features (09h): Supported 00:21:06.942 Get Features (0Ah): Supported 00:21:06.942 Asynchronous Event Request (0Ch): Supported 00:21:06.942 Keep Alive (18h): Supported 00:21:06.942 I/O Commands 00:21:06.942 ------------ 00:21:06.942 Flush (00h): Supported LBA-Change 00:21:06.942 Write (01h): Supported LBA-Change 00:21:06.942 Read (02h): Supported 00:21:06.942 Compare (05h): Supported 00:21:06.942 Write Zeroes (08h): Supported LBA-Change 00:21:06.942 Dataset Management (09h): Supported LBA-Change 00:21:06.942 Copy (19h): Supported LBA-Change 00:21:06.942 Unknown (79h): Supported LBA-Change 00:21:06.942 Unknown (7Ah): Supported 00:21:06.942 00:21:06.942 Error Log 00:21:06.942 ========= 00:21:06.942 00:21:06.942 Arbitration 00:21:06.942 =========== 00:21:06.942 Arbitration Burst: 1 00:21:06.942 00:21:06.942 Power Management 00:21:06.942 ================ 00:21:06.942 
Number of Power States: 1 00:21:06.942 Current Power State: Power State #0 00:21:06.942 Power State #0: 00:21:06.942 Max Power: 0.00 W 00:21:06.942 Non-Operational State: Operational 00:21:06.942 Entry Latency: Not Reported 00:21:06.942 Exit Latency: Not Reported 00:21:06.942 Relative Read Throughput: 0 00:21:06.942 Relative Read Latency: 0 00:21:06.942 Relative Write Throughput: 0 00:21:06.942 Relative Write Latency: 0 00:21:06.942 Idle Power: Not Reported 00:21:06.942 Active Power: Not Reported 00:21:06.942 Non-Operational Permissive Mode: Not Supported 00:21:06.942 00:21:06.942 Health Information 00:21:06.942 ================== 00:21:06.942 Critical Warnings: 00:21:06.942 Available Spare Space: OK 00:21:06.942 Temperature: OK 00:21:06.942 Device Reliability: OK 00:21:06.942 Read Only: No 00:21:06.942 Volatile Memory Backup: OK 00:21:06.942 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:06.942 Temperature Threshold: [2024-12-14 06:51:20.775297] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.942 [2024-12-14 06:51:20.775304] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.942 [2024-12-14 06:51:20.775308] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x242dd30) 00:21:06.942 [2024-12-14 06:51:20.775317] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.942 [2024-12-14 06:51:20.775346] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c8d0, cid 7, qid 0 00:21:06.942 [2024-12-14 06:51:20.775435] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.942 [2024-12-14 06:51:20.775452] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.942 [2024-12-14 06:51:20.775455] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.942 [2024-12-14 06:51:20.775459] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c8d0) on tqpair=0x242dd30 00:21:06.942 [2024-12-14 06:51:20.775507] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:06.942 [2024-12-14 06:51:20.775521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.942 [2024-12-14 06:51:20.775528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.942 [2024-12-14 06:51:20.775534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.942 [2024-12-14 06:51:20.775540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.942 [2024-12-14 06:51:20.775549] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.942 [2024-12-14 06:51:20.775553] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.942 [2024-12-14 06:51:20.775556] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.942 [2024-12-14 06:51:20.775564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.942 [2024-12-14 06:51:20.775587] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 
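The controller and namespace data in the dump above is what the initiator reads back over NVMe/TCP from the target listening at 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode1. A minimal sketch of reproducing the same dump by hand is to point SPDK's identify example at that listener; the binary path and the -r transport-ID string below are assumptions based on the repo layout seen in this run, not the exact command host/identify.sh executes:

    # sketch: dump controller/namespace identify data from the running target
    # (assumes the example apps were built under build/examples)
    /home/vagrant/spdk_repo/spdk/build/examples/identify \
        -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1"

The interleaving seen here (identify fields split around *DEBUG* records) is stdout/stderr mixing from the same process: the identify dump goes to stdout while the nvme_tcp/nvme_qpair debug log goes to stderr, and the console captures both streams together.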
00:21:06.942 [2024-12-14 06:51:20.775643] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.942 [2024-12-14 06:51:20.775650] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.942 [2024-12-14 06:51:20.775653] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.942 [2024-12-14 06:51:20.775657] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.942 [2024-12-14 06:51:20.775665] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.942 [2024-12-14 06:51:20.775670] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.942 [2024-12-14 06:51:20.775673] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.942 [2024-12-14 06:51:20.775680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.942 [2024-12-14 06:51:20.775702] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.942 [2024-12-14 06:51:20.775776] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.942 [2024-12-14 06:51:20.775782] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.942 [2024-12-14 06:51:20.775785] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.942 [2024-12-14 06:51:20.775789] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.942 [2024-12-14 06:51:20.775795] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:06.942 [2024-12-14 06:51:20.775799] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:06.942 [2024-12-14 06:51:20.775809] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.942 [2024-12-14 06:51:20.775813] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.942 [2024-12-14 06:51:20.775817] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.942 [2024-12-14 06:51:20.775824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.942 [2024-12-14 06:51:20.775841] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.942 [2024-12-14 06:51:20.775897] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.942 [2024-12-14 06:51:20.775904] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.942 [2024-12-14 06:51:20.775907] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.942 [2024-12-14 06:51:20.775911] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.942 [2024-12-14 06:51:20.775922] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.942 [2024-12-14 06:51:20.775927] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.942 [2024-12-14 06:51:20.775930] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.942 [2024-12-14 06:51:20.775937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.942 [2024-12-14 
06:51:20.775970] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.942 [2024-12-14 06:51:20.776063] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.942 [2024-12-14 06:51:20.776070] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.942 [2024-12-14 06:51:20.776074] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.942 [2024-12-14 06:51:20.776078] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.942 [2024-12-14 06:51:20.776089] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.942 [2024-12-14 06:51:20.776094] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.942 [2024-12-14 06:51:20.776098] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.942 [2024-12-14 06:51:20.776105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.942 [2024-12-14 06:51:20.776125] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.942 [2024-12-14 06:51:20.776181] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.942 [2024-12-14 06:51:20.776188] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.942 [2024-12-14 06:51:20.776191] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.776195] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.943 [2024-12-14 06:51:20.776206] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.776211] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.776215] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.943 [2024-12-14 06:51:20.776222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.943 [2024-12-14 06:51:20.776240] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.943 [2024-12-14 06:51:20.776298] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.943 [2024-12-14 06:51:20.776310] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.943 [2024-12-14 06:51:20.776314] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.776318] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.943 [2024-12-14 06:51:20.776330] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.776335] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.776339] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.943 [2024-12-14 06:51:20.776346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.943 [2024-12-14 06:51:20.776365] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.943 [2024-12-14 06:51:20.776426] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:21:06.943 [2024-12-14 06:51:20.776437] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.943 [2024-12-14 06:51:20.776442] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.776446] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.943 [2024-12-14 06:51:20.776457] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.776462] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.776466] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.943 [2024-12-14 06:51:20.776473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.943 [2024-12-14 06:51:20.776492] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.943 [2024-12-14 06:51:20.776545] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.943 [2024-12-14 06:51:20.776551] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.943 [2024-12-14 06:51:20.776555] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.776559] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.943 [2024-12-14 06:51:20.776570] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.776574] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.776578] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.943 [2024-12-14 06:51:20.776585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.943 [2024-12-14 06:51:20.776603] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.943 [2024-12-14 06:51:20.776656] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.943 [2024-12-14 06:51:20.776662] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.943 [2024-12-14 06:51:20.776666] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.776670] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.943 [2024-12-14 06:51:20.776681] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.776685] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.776689] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.943 [2024-12-14 06:51:20.776696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.943 [2024-12-14 06:51:20.776714] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.943 [2024-12-14 06:51:20.776769] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.943 [2024-12-14 06:51:20.776777] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.943 [2024-12-14 06:51:20.776780] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.776784] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.943 [2024-12-14 06:51:20.776795] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.776800] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.776804] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.943 [2024-12-14 06:51:20.776811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.943 [2024-12-14 06:51:20.776829] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.943 [2024-12-14 06:51:20.776893] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.943 [2024-12-14 06:51:20.776904] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.943 [2024-12-14 06:51:20.776908] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.776912] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.943 [2024-12-14 06:51:20.776924] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.776929] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.776932] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.943 [2024-12-14 06:51:20.776940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.943 [2024-12-14 06:51:20.776989] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.943 [2024-12-14 06:51:20.777047] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.943 [2024-12-14 06:51:20.777054] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.943 [2024-12-14 06:51:20.777058] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.777062] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.943 [2024-12-14 06:51:20.777074] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.777078] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.777082] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.943 [2024-12-14 06:51:20.777090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.943 [2024-12-14 06:51:20.777109] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.943 [2024-12-14 06:51:20.777166] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.943 [2024-12-14 06:51:20.777173] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.943 [2024-12-14 06:51:20.777177] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.777181] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on 
tqpair=0x242dd30 00:21:06.943 [2024-12-14 06:51:20.777192] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.777197] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.777201] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.943 [2024-12-14 06:51:20.777208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.943 [2024-12-14 06:51:20.777226] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.943 [2024-12-14 06:51:20.777298] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.943 [2024-12-14 06:51:20.777304] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.943 [2024-12-14 06:51:20.777308] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.777312] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.943 [2024-12-14 06:51:20.777323] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.777328] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.777331] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.943 [2024-12-14 06:51:20.777338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.943 [2024-12-14 06:51:20.777370] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.943 [2024-12-14 06:51:20.777427] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.943 [2024-12-14 06:51:20.777433] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.943 [2024-12-14 06:51:20.777436] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.777440] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.943 [2024-12-14 06:51:20.777451] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.777455] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.777459] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.943 [2024-12-14 06:51:20.777466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.943 [2024-12-14 06:51:20.777483] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.943 [2024-12-14 06:51:20.777546] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.943 [2024-12-14 06:51:20.777552] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.943 [2024-12-14 06:51:20.777556] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.777560] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.943 [2024-12-14 06:51:20.777570] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.777575] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.777578] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.943 [2024-12-14 06:51:20.777585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.943 [2024-12-14 06:51:20.777603] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.943 [2024-12-14 06:51:20.777656] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.943 [2024-12-14 06:51:20.777680] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.943 [2024-12-14 06:51:20.777684] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.943 [2024-12-14 06:51:20.777688] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.943 [2024-12-14 06:51:20.777700] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.777705] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.777708] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.944 [2024-12-14 06:51:20.777716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.944 [2024-12-14 06:51:20.777749] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.944 [2024-12-14 06:51:20.777802] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.944 [2024-12-14 06:51:20.777812] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.944 [2024-12-14 06:51:20.777816] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.777820] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.944 [2024-12-14 06:51:20.777831] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.777835] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.777839] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.944 [2024-12-14 06:51:20.777846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.944 [2024-12-14 06:51:20.777863] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.944 [2024-12-14 06:51:20.777918] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.944 [2024-12-14 06:51:20.777929] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.944 [2024-12-14 06:51:20.777933] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.777937] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.944 [2024-12-14 06:51:20.777971] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.777977] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.777980] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 
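The run of near-identical records above and below (pdu type = 5, complete tcp_req, FABRIC PROPERTY GET qid:0 cid:3) is the host-side shutdown path repeatedly reading a controller property over the admin queue until the target reports the shutdown finished; the loop ends at the "shutdown complete in 7 milliseconds" record further down, well inside the 10000 ms timeout noted earlier. A quick way to gauge how many iterations a given run needed is to count those records in the saved console output; the log file name below is only a placeholder:

    # sketch: count shutdown-poll iterations in a captured console log (file name is a placeholder)
    grep -c 'FABRIC PROPERTY GET qid:0 cid:3' console.log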
00:21:06.944 [2024-12-14 06:51:20.777987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.944 [2024-12-14 06:51:20.778052] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.944 [2024-12-14 06:51:20.778114] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.944 [2024-12-14 06:51:20.778121] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.944 [2024-12-14 06:51:20.778124] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.778128] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.944 [2024-12-14 06:51:20.778140] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.778144] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.778148] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.944 [2024-12-14 06:51:20.778156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.944 [2024-12-14 06:51:20.778174] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.944 [2024-12-14 06:51:20.778230] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.944 [2024-12-14 06:51:20.778238] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.944 [2024-12-14 06:51:20.778242] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.778246] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.944 [2024-12-14 06:51:20.778257] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.778262] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.778265] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.944 [2024-12-14 06:51:20.778273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.944 [2024-12-14 06:51:20.778290] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.944 [2024-12-14 06:51:20.778343] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.944 [2024-12-14 06:51:20.778350] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.944 [2024-12-14 06:51:20.778353] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.778357] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.944 [2024-12-14 06:51:20.778368] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.778373] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.778391] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.944 [2024-12-14 06:51:20.778398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.944 
[2024-12-14 06:51:20.778415] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.944 [2024-12-14 06:51:20.778489] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.944 [2024-12-14 06:51:20.778495] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.944 [2024-12-14 06:51:20.778499] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.778503] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.944 [2024-12-14 06:51:20.778513] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.778518] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.778521] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.944 [2024-12-14 06:51:20.778528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.944 [2024-12-14 06:51:20.778545] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.944 [2024-12-14 06:51:20.778598] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.944 [2024-12-14 06:51:20.778605] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.944 [2024-12-14 06:51:20.778609] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.778613] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.944 [2024-12-14 06:51:20.778623] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.778628] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.778632] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.944 [2024-12-14 06:51:20.778639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.944 [2024-12-14 06:51:20.778656] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.944 [2024-12-14 06:51:20.778706] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.944 [2024-12-14 06:51:20.778713] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.944 [2024-12-14 06:51:20.778716] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.778720] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.944 [2024-12-14 06:51:20.778731] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.778736] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.778739] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.944 [2024-12-14 06:51:20.778746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.944 [2024-12-14 06:51:20.778763] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.944 [2024-12-14 06:51:20.778819] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.944 [2024-12-14 06:51:20.778830] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.944 [2024-12-14 06:51:20.778834] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.778838] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.944 [2024-12-14 06:51:20.778849] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.778854] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.778857] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.944 [2024-12-14 06:51:20.778864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.944 [2024-12-14 06:51:20.778883] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.944 [2024-12-14 06:51:20.778939] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.944 [2024-12-14 06:51:20.782991] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.944 [2024-12-14 06:51:20.783026] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.783031] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.944 [2024-12-14 06:51:20.783048] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.783054] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.783057] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242dd30) 00:21:06.944 [2024-12-14 06:51:20.783066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.944 [2024-12-14 06:51:20.783104] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248c350, cid 3, qid 0 00:21:06.944 [2024-12-14 06:51:20.783175] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.944 [2024-12-14 06:51:20.783182] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.944 [2024-12-14 06:51:20.783186] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.944 [2024-12-14 06:51:20.783190] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x248c350) on tqpair=0x242dd30 00:21:06.944 [2024-12-14 06:51:20.783199] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:21:06.944 0 Kelvin (-273 Celsius) 00:21:06.944 Available Spare: 0% 00:21:06.944 Available Spare Threshold: 0% 00:21:06.944 Life Percentage Used: 0% 00:21:06.944 Data Units Read: 0 00:21:06.944 Data Units Written: 0 00:21:06.944 Host Read Commands: 0 00:21:06.944 Host Write Commands: 0 00:21:06.944 Controller Busy Time: 0 minutes 00:21:06.944 Power Cycles: 0 00:21:06.944 Power On Hours: 0 hours 00:21:06.944 Unsafe Shutdowns: 0 00:21:06.944 Unrecoverable Media Errors: 0 00:21:06.944 Lifetime Error Log Entries: 0 00:21:06.944 Warning Temperature Time: 0 minutes 00:21:06.944 Critical Temperature Time: 0 minutes 00:21:06.944 00:21:06.944 Number of Queues 00:21:06.944 ================ 00:21:06.944 Number of I/O 
Submission Queues: 127 00:21:06.944 Number of I/O Completion Queues: 127 00:21:06.944 00:21:06.944 Active Namespaces 00:21:06.944 ================= 00:21:06.945 Namespace ID:1 00:21:06.945 Error Recovery Timeout: Unlimited 00:21:06.945 Command Set Identifier: NVM (00h) 00:21:06.945 Deallocate: Supported 00:21:06.945 Deallocated/Unwritten Error: Not Supported 00:21:06.945 Deallocated Read Value: Unknown 00:21:06.945 Deallocate in Write Zeroes: Not Supported 00:21:06.945 Deallocated Guard Field: 0xFFFF 00:21:06.945 Flush: Supported 00:21:06.945 Reservation: Supported 00:21:06.945 Namespace Sharing Capabilities: Multiple Controllers 00:21:06.945 Size (in LBAs): 131072 (0GiB) 00:21:06.945 Capacity (in LBAs): 131072 (0GiB) 00:21:06.945 Utilization (in LBAs): 131072 (0GiB) 00:21:06.945 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:06.945 EUI64: ABCDEF0123456789 00:21:06.945 UUID: ae34f6c0-f467-4057-982f-b2228a1618a3 00:21:06.945 Thin Provisioning: Not Supported 00:21:06.945 Per-NS Atomic Units: Yes 00:21:06.945 Atomic Boundary Size (Normal): 0 00:21:06.945 Atomic Boundary Size (PFail): 0 00:21:06.945 Atomic Boundary Offset: 0 00:21:06.945 Maximum Single Source Range Length: 65535 00:21:06.945 Maximum Copy Length: 65535 00:21:06.945 Maximum Source Range Count: 1 00:21:06.945 NGUID/EUI64 Never Reused: No 00:21:06.945 Namespace Write Protected: No 00:21:06.945 Number of LBA Formats: 1 00:21:06.945 Current LBA Format: LBA Format #00 00:21:06.945 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:06.945 00:21:06.945 06:51:20 -- host/identify.sh@51 -- # sync 00:21:06.945 06:51:20 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:06.945 06:51:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.945 06:51:20 -- common/autotest_common.sh@10 -- # set +x 00:21:06.945 06:51:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.945 06:51:20 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:06.945 06:51:20 -- host/identify.sh@56 -- # nvmftestfini 00:21:06.945 06:51:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:06.945 06:51:20 -- nvmf/common.sh@116 -- # sync 00:21:06.945 06:51:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:06.945 06:51:20 -- nvmf/common.sh@119 -- # set +e 00:21:06.945 06:51:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:06.945 06:51:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:06.945 rmmod nvme_tcp 00:21:06.945 rmmod nvme_fabrics 00:21:06.945 rmmod nvme_keyring 00:21:06.945 06:51:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:06.945 06:51:20 -- nvmf/common.sh@123 -- # set -e 00:21:06.945 06:51:20 -- nvmf/common.sh@124 -- # return 0 00:21:06.945 06:51:20 -- nvmf/common.sh@477 -- # '[' -n 83016 ']' 00:21:06.945 06:51:20 -- nvmf/common.sh@478 -- # killprocess 83016 00:21:06.945 06:51:20 -- common/autotest_common.sh@936 -- # '[' -z 83016 ']' 00:21:06.945 06:51:20 -- common/autotest_common.sh@940 -- # kill -0 83016 00:21:07.204 06:51:20 -- common/autotest_common.sh@941 -- # uname 00:21:07.204 06:51:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:07.204 06:51:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83016 00:21:07.204 06:51:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:07.204 06:51:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:07.204 killing process with pid 83016 00:21:07.204 06:51:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83016' 00:21:07.204 06:51:20 
-- common/autotest_common.sh@955 -- # kill 83016 00:21:07.204 [2024-12-14 06:51:20.960619] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:07.204 06:51:20 -- common/autotest_common.sh@960 -- # wait 83016 00:21:07.463 06:51:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:07.463 06:51:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:07.463 06:51:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:07.463 06:51:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:07.463 06:51:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:07.463 06:51:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.463 06:51:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:07.463 06:51:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.463 06:51:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:07.463 00:21:07.463 real 0m2.838s 00:21:07.463 user 0m7.923s 00:21:07.463 sys 0m0.754s 00:21:07.463 06:51:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:07.463 06:51:21 -- common/autotest_common.sh@10 -- # set +x 00:21:07.463 ************************************ 00:21:07.463 END TEST nvmf_identify 00:21:07.463 ************************************ 00:21:07.463 06:51:21 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:07.463 06:51:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:07.463 06:51:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:07.463 06:51:21 -- common/autotest_common.sh@10 -- # set +x 00:21:07.463 ************************************ 00:21:07.463 START TEST nvmf_perf 00:21:07.463 ************************************ 00:21:07.463 06:51:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:07.722 * Looking for test storage... 00:21:07.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:07.722 06:51:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:07.722 06:51:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:07.722 06:51:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:07.722 06:51:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:07.722 06:51:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:07.722 06:51:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:07.722 06:51:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:07.722 06:51:21 -- scripts/common.sh@335 -- # IFS=.-: 00:21:07.722 06:51:21 -- scripts/common.sh@335 -- # read -ra ver1 00:21:07.722 06:51:21 -- scripts/common.sh@336 -- # IFS=.-: 00:21:07.722 06:51:21 -- scripts/common.sh@336 -- # read -ra ver2 00:21:07.722 06:51:21 -- scripts/common.sh@337 -- # local 'op=<' 00:21:07.722 06:51:21 -- scripts/common.sh@339 -- # ver1_l=2 00:21:07.722 06:51:21 -- scripts/common.sh@340 -- # ver2_l=1 00:21:07.722 06:51:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:07.722 06:51:21 -- scripts/common.sh@343 -- # case "$op" in 00:21:07.722 06:51:21 -- scripts/common.sh@344 -- # : 1 00:21:07.722 06:51:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:07.722 06:51:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:07.722 06:51:21 -- scripts/common.sh@364 -- # decimal 1 00:21:07.722 06:51:21 -- scripts/common.sh@352 -- # local d=1 00:21:07.722 06:51:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:07.722 06:51:21 -- scripts/common.sh@354 -- # echo 1 00:21:07.722 06:51:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:07.722 06:51:21 -- scripts/common.sh@365 -- # decimal 2 00:21:07.723 06:51:21 -- scripts/common.sh@352 -- # local d=2 00:21:07.723 06:51:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:07.723 06:51:21 -- scripts/common.sh@354 -- # echo 2 00:21:07.723 06:51:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:07.723 06:51:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:07.723 06:51:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:07.723 06:51:21 -- scripts/common.sh@367 -- # return 0 00:21:07.723 06:51:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:07.723 06:51:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:07.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.723 --rc genhtml_branch_coverage=1 00:21:07.723 --rc genhtml_function_coverage=1 00:21:07.723 --rc genhtml_legend=1 00:21:07.723 --rc geninfo_all_blocks=1 00:21:07.723 --rc geninfo_unexecuted_blocks=1 00:21:07.723 00:21:07.723 ' 00:21:07.723 06:51:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:07.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.723 --rc genhtml_branch_coverage=1 00:21:07.723 --rc genhtml_function_coverage=1 00:21:07.723 --rc genhtml_legend=1 00:21:07.723 --rc geninfo_all_blocks=1 00:21:07.723 --rc geninfo_unexecuted_blocks=1 00:21:07.723 00:21:07.723 ' 00:21:07.723 06:51:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:07.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.723 --rc genhtml_branch_coverage=1 00:21:07.723 --rc genhtml_function_coverage=1 00:21:07.723 --rc genhtml_legend=1 00:21:07.723 --rc geninfo_all_blocks=1 00:21:07.723 --rc geninfo_unexecuted_blocks=1 00:21:07.723 00:21:07.723 ' 00:21:07.723 06:51:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:07.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.723 --rc genhtml_branch_coverage=1 00:21:07.723 --rc genhtml_function_coverage=1 00:21:07.723 --rc genhtml_legend=1 00:21:07.723 --rc geninfo_all_blocks=1 00:21:07.723 --rc geninfo_unexecuted_blocks=1 00:21:07.723 00:21:07.723 ' 00:21:07.723 06:51:21 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:07.723 06:51:21 -- nvmf/common.sh@7 -- # uname -s 00:21:07.723 06:51:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.723 06:51:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.723 06:51:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.723 06:51:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.723 06:51:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.723 06:51:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.723 06:51:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.723 06:51:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.723 06:51:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.723 06:51:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.723 06:51:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:21:07.723 
06:51:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:21:07.723 06:51:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.723 06:51:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.723 06:51:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:07.723 06:51:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:07.723 06:51:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.723 06:51:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.723 06:51:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.723 06:51:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.723 06:51:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.723 06:51:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.723 06:51:21 -- paths/export.sh@5 -- # export PATH 00:21:07.723 06:51:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.723 06:51:21 -- nvmf/common.sh@46 -- # : 0 00:21:07.723 06:51:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:07.723 06:51:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:07.723 06:51:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:07.723 06:51:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:07.723 06:51:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:07.723 06:51:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
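At this point the nvmf_perf stage has sourced test/nvmf/common.sh and generated a host NQN/ID, mirroring the preamble the identify stage went through. Outside the CI wrapper the same stage can be launched directly, roughly as below; the sudo invocation and the exported NET_TYPE=virt are assumptions about how to select the veth topology when common.sh is not driven by the job's config:

    # sketch: launch the perf stage standalone against the virtual (veth) topology
    cd /home/vagrant/spdk_repo/spdk
    sudo NET_TYPE=virt ./test/nvmf/host/perf.sh --transport=tcp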
00:21:07.723 06:51:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:07.723 06:51:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:07.723 06:51:21 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:07.723 06:51:21 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:07.723 06:51:21 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:07.723 06:51:21 -- host/perf.sh@17 -- # nvmftestinit 00:21:07.723 06:51:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:07.723 06:51:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.723 06:51:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:07.723 06:51:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:07.723 06:51:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:07.723 06:51:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.723 06:51:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:07.723 06:51:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.723 06:51:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:07.723 06:51:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:07.723 06:51:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:07.723 06:51:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:07.723 06:51:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:07.723 06:51:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:07.723 06:51:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.723 06:51:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:07.723 06:51:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:07.723 06:51:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:07.723 06:51:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:07.723 06:51:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:07.723 06:51:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:07.723 06:51:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:07.723 06:51:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:07.723 06:51:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:07.723 06:51:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:07.723 06:51:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:07.723 06:51:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:07.723 06:51:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:07.723 Cannot find device "nvmf_tgt_br" 00:21:07.723 06:51:21 -- nvmf/common.sh@154 -- # true 00:21:07.723 06:51:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:07.723 Cannot find device "nvmf_tgt_br2" 00:21:07.723 06:51:21 -- nvmf/common.sh@155 -- # true 00:21:07.723 06:51:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:07.723 06:51:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:07.723 Cannot find device "nvmf_tgt_br" 00:21:07.723 06:51:21 -- nvmf/common.sh@157 -- # true 00:21:07.723 06:51:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:07.723 Cannot find device "nvmf_tgt_br2" 00:21:07.723 06:51:21 -- nvmf/common.sh@158 -- # true 00:21:07.723 06:51:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:07.982 06:51:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:07.982 06:51:21 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:07.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:07.982 06:51:21 -- nvmf/common.sh@161 -- # true 00:21:07.982 06:51:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:07.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:07.982 06:51:21 -- nvmf/common.sh@162 -- # true 00:21:07.982 06:51:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:07.982 06:51:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:07.982 06:51:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:07.982 06:51:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:07.982 06:51:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:07.982 06:51:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:07.982 06:51:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:07.982 06:51:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:07.982 06:51:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:07.982 06:51:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:07.982 06:51:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:07.982 06:51:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:07.982 06:51:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:07.982 06:51:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:07.982 06:51:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:07.982 06:51:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:07.982 06:51:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:07.982 06:51:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:07.982 06:51:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:07.982 06:51:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:07.982 06:51:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:08.241 06:51:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:08.241 06:51:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:08.241 06:51:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:08.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:08.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:21:08.241 00:21:08.241 --- 10.0.0.2 ping statistics --- 00:21:08.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.241 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:21:08.241 06:51:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:08.241 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:08.241 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:21:08.241 00:21:08.241 --- 10.0.0.3 ping statistics --- 00:21:08.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.241 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:08.241 06:51:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:08.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:08.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:21:08.241 00:21:08.241 --- 10.0.0.1 ping statistics --- 00:21:08.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.241 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:21:08.241 06:51:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.241 06:51:21 -- nvmf/common.sh@421 -- # return 0 00:21:08.241 06:51:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:08.241 06:51:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.241 06:51:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:08.241 06:51:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:08.241 06:51:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.241 06:51:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:08.241 06:51:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:08.241 06:51:22 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:08.241 06:51:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:08.241 06:51:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:08.242 06:51:22 -- common/autotest_common.sh@10 -- # set +x 00:21:08.242 06:51:22 -- nvmf/common.sh@469 -- # nvmfpid=83250 00:21:08.242 06:51:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:08.242 06:51:22 -- nvmf/common.sh@470 -- # waitforlisten 83250 00:21:08.242 06:51:22 -- common/autotest_common.sh@829 -- # '[' -z 83250 ']' 00:21:08.242 06:51:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.242 06:51:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:08.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.242 06:51:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.242 06:51:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:08.242 06:51:22 -- common/autotest_common.sh@10 -- # set +x 00:21:08.242 [2024-12-14 06:51:22.080552] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:08.242 [2024-12-14 06:51:22.080631] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.242 [2024-12-14 06:51:22.215173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:08.500 [2024-12-14 06:51:22.318658] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:08.500 [2024-12-14 06:51:22.318800] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.500 [2024-12-14 06:51:22.318814] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:08.500 [2024-12-14 06:51:22.318822] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:08.500 [2024-12-14 06:51:22.319033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.501 [2024-12-14 06:51:22.319522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.501 [2024-12-14 06:51:22.319650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:08.501 [2024-12-14 06:51:22.319730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.437 06:51:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:09.437 06:51:23 -- common/autotest_common.sh@862 -- # return 0 00:21:09.437 06:51:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:09.437 06:51:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:09.437 06:51:23 -- common/autotest_common.sh@10 -- # set +x 00:21:09.437 06:51:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.437 06:51:23 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:09.437 06:51:23 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:21:09.696 06:51:23 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:21:09.696 06:51:23 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:09.955 06:51:23 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:21:09.955 06:51:23 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:10.523 06:51:24 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:10.523 06:51:24 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:21:10.523 06:51:24 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:10.523 06:51:24 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:10.523 06:51:24 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:10.781 [2024-12-14 06:51:24.531602] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.781 06:51:24 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:11.040 06:51:24 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:11.040 06:51:24 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:11.040 06:51:25 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:11.040 06:51:25 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:11.299 06:51:25 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.559 [2024-12-14 06:51:25.505628] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.559 06:51:25 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:11.818 06:51:25 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:21:11.818 06:51:25 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:21:11.818 06:51:25 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:11.818 06:51:25 -- host/perf.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:21:13.196 Initializing NVMe Controllers 00:21:13.196 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:21:13.196 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:21:13.196 Initialization complete. Launching workers. 00:21:13.196 ======================================================== 00:21:13.196 Latency(us) 00:21:13.196 Device Information : IOPS MiB/s Average min max 00:21:13.196 PCIE (0000:00:06.0) NSID 1 from core 0: 21783.55 85.09 1468.35 322.97 8187.00 00:21:13.196 ======================================================== 00:21:13.196 Total : 21783.55 85.09 1468.35 322.97 8187.00 00:21:13.196 00:21:13.196 06:51:26 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:14.591 Initializing NVMe Controllers 00:21:14.591 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:14.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:14.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:14.591 Initialization complete. Launching workers. 00:21:14.591 ======================================================== 00:21:14.591 Latency(us) 00:21:14.591 Device Information : IOPS MiB/s Average min max 00:21:14.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2861.87 11.18 349.09 113.46 7177.02 00:21:14.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.52 0.48 8095.39 6950.51 14970.78 00:21:14.591 ======================================================== 00:21:14.591 Total : 2985.39 11.66 669.60 113.46 14970.78 00:21:14.591 00:21:14.591 06:51:28 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:15.968 Initializing NVMe Controllers 00:21:15.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:15.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:15.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:15.968 Initialization complete. Launching workers. 00:21:15.968 ======================================================== 00:21:15.968 Latency(us) 00:21:15.968 Device Information : IOPS MiB/s Average min max 00:21:15.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8842.89 34.54 3623.26 725.90 7473.92 00:21:15.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2686.97 10.50 12028.69 6619.72 22823.28 00:21:15.968 ======================================================== 00:21:15.968 Total : 11529.86 45.04 5582.10 725.90 22823.28 00:21:15.968 00:21:15.968 06:51:29 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:21:15.968 06:51:29 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:18.502 Initializing NVMe Controllers 00:21:18.502 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:18.502 Controller IO queue size 128, less than required. 
00:21:18.502 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:18.502 Controller IO queue size 128, less than required. 00:21:18.502 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:18.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:18.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:18.502 Initialization complete. Launching workers. 00:21:18.502 ======================================================== 00:21:18.502 Latency(us) 00:21:18.502 Device Information : IOPS MiB/s Average min max 00:21:18.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1587.38 396.84 81784.13 57842.28 148288.30 00:21:18.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 523.81 130.95 252909.93 94187.94 362610.65 00:21:18.502 ======================================================== 00:21:18.502 Total : 2111.19 527.80 124242.70 57842.28 362610.65 00:21:18.502 00:21:18.502 06:51:32 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:18.502 No valid NVMe controllers or AIO or URING devices found 00:21:18.502 Initializing NVMe Controllers 00:21:18.502 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:18.502 Controller IO queue size 128, less than required. 00:21:18.502 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:18.502 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:18.502 Controller IO queue size 128, less than required. 00:21:18.502 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:18.502 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:21:18.502 WARNING: Some requested NVMe devices were skipped 00:21:18.502 06:51:32 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:21.038 Initializing NVMe Controllers 00:21:21.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:21.038 Controller IO queue size 128, less than required. 00:21:21.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:21.038 Controller IO queue size 128, less than required. 00:21:21.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:21.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:21.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:21.038 Initialization complete. Launching workers. 
00:21:21.038 00:21:21.038 ==================== 00:21:21.038 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:21.038 TCP transport: 00:21:21.038 polls: 7587 00:21:21.038 idle_polls: 5057 00:21:21.038 sock_completions: 2530 00:21:21.038 nvme_completions: 5084 00:21:21.038 submitted_requests: 7831 00:21:21.038 queued_requests: 1 00:21:21.038 00:21:21.038 ==================== 00:21:21.038 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:21.038 TCP transport: 00:21:21.038 polls: 10379 00:21:21.038 idle_polls: 7617 00:21:21.038 sock_completions: 2762 00:21:21.038 nvme_completions: 5483 00:21:21.038 submitted_requests: 8348 00:21:21.038 queued_requests: 1 00:21:21.038 ======================================================== 00:21:21.038 Latency(us) 00:21:21.038 Device Information : IOPS MiB/s Average min max 00:21:21.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1332.25 333.06 99029.08 50419.33 178036.21 00:21:21.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1431.58 357.90 90404.92 52764.78 142434.32 00:21:21.038 ======================================================== 00:21:21.038 Total : 2763.83 690.96 94562.03 50419.33 178036.21 00:21:21.038 00:21:21.038 06:51:34 -- host/perf.sh@66 -- # sync 00:21:21.038 06:51:34 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:21.295 06:51:35 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:21:21.295 06:51:35 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:21:21.553 06:51:35 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:21:21.811 06:51:35 -- host/perf.sh@72 -- # ls_guid=4fb8c0bf-66cc-4a26-873e-07e6b61f329b 00:21:21.811 06:51:35 -- host/perf.sh@73 -- # get_lvs_free_mb 4fb8c0bf-66cc-4a26-873e-07e6b61f329b 00:21:21.811 06:51:35 -- common/autotest_common.sh@1353 -- # local lvs_uuid=4fb8c0bf-66cc-4a26-873e-07e6b61f329b 00:21:21.811 06:51:35 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:21.811 06:51:35 -- common/autotest_common.sh@1355 -- # local fc 00:21:21.811 06:51:35 -- common/autotest_common.sh@1356 -- # local cs 00:21:21.811 06:51:35 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:21.811 06:51:35 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:21.811 { 00:21:21.811 "base_bdev": "Nvme0n1", 00:21:21.811 "block_size": 4096, 00:21:21.811 "cluster_size": 4194304, 00:21:21.811 "free_clusters": 1278, 00:21:21.811 "name": "lvs_0", 00:21:21.811 "total_data_clusters": 1278, 00:21:21.811 "uuid": "4fb8c0bf-66cc-4a26-873e-07e6b61f329b" 00:21:21.811 } 00:21:21.811 ]' 00:21:21.811 06:51:35 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="4fb8c0bf-66cc-4a26-873e-07e6b61f329b") .free_clusters' 00:21:22.070 06:51:35 -- common/autotest_common.sh@1358 -- # fc=1278 00:21:22.070 06:51:35 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="4fb8c0bf-66cc-4a26-873e-07e6b61f329b") .cluster_size' 00:21:22.070 5112 00:21:22.070 06:51:35 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:22.070 06:51:35 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:21:22.070 06:51:35 -- common/autotest_common.sh@1363 -- # echo 5112 00:21:22.070 06:51:35 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:21:22.070 06:51:35 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -u 4fb8c0bf-66cc-4a26-873e-07e6b61f329b lbd_0 5112 00:21:22.329 06:51:36 -- host/perf.sh@80 -- # lb_guid=2edef6b9-4a5b-4d3c-bc03-792b1889c166 00:21:22.329 06:51:36 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 2edef6b9-4a5b-4d3c-bc03-792b1889c166 lvs_n_0 00:21:22.588 06:51:36 -- host/perf.sh@83 -- # ls_nested_guid=d04ed82f-3fd9-4f80-a825-fe17b3035a69 00:21:22.588 06:51:36 -- host/perf.sh@84 -- # get_lvs_free_mb d04ed82f-3fd9-4f80-a825-fe17b3035a69 00:21:22.588 06:51:36 -- common/autotest_common.sh@1353 -- # local lvs_uuid=d04ed82f-3fd9-4f80-a825-fe17b3035a69 00:21:22.588 06:51:36 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:22.588 06:51:36 -- common/autotest_common.sh@1355 -- # local fc 00:21:22.588 06:51:36 -- common/autotest_common.sh@1356 -- # local cs 00:21:22.588 06:51:36 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:22.861 06:51:36 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:22.861 { 00:21:22.861 "base_bdev": "Nvme0n1", 00:21:22.861 "block_size": 4096, 00:21:22.861 "cluster_size": 4194304, 00:21:22.861 "free_clusters": 0, 00:21:22.861 "name": "lvs_0", 00:21:22.861 "total_data_clusters": 1278, 00:21:22.861 "uuid": "4fb8c0bf-66cc-4a26-873e-07e6b61f329b" 00:21:22.862 }, 00:21:22.862 { 00:21:22.862 "base_bdev": "2edef6b9-4a5b-4d3c-bc03-792b1889c166", 00:21:22.862 "block_size": 4096, 00:21:22.862 "cluster_size": 4194304, 00:21:22.862 "free_clusters": 1276, 00:21:22.862 "name": "lvs_n_0", 00:21:22.862 "total_data_clusters": 1276, 00:21:22.862 "uuid": "d04ed82f-3fd9-4f80-a825-fe17b3035a69" 00:21:22.862 } 00:21:22.862 ]' 00:21:22.862 06:51:36 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="d04ed82f-3fd9-4f80-a825-fe17b3035a69") .free_clusters' 00:21:23.148 06:51:36 -- common/autotest_common.sh@1358 -- # fc=1276 00:21:23.148 06:51:36 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="d04ed82f-3fd9-4f80-a825-fe17b3035a69") .cluster_size' 00:21:23.148 5104 00:21:23.148 06:51:36 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:23.148 06:51:36 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:21:23.148 06:51:36 -- common/autotest_common.sh@1363 -- # echo 5104 00:21:23.148 06:51:36 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:21:23.148 06:51:36 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d04ed82f-3fd9-4f80-a825-fe17b3035a69 lbd_nest_0 5104 00:21:23.407 06:51:37 -- host/perf.sh@88 -- # lb_nested_guid=000159ab-c37c-4b84-b2c0-7630816e8bcf 00:21:23.407 06:51:37 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.666 06:51:37 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:21:23.666 06:51:37 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 000159ab-c37c-4b84-b2c0-7630816e8bcf 00:21:23.925 06:51:37 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:24.184 06:51:37 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:21:24.184 06:51:37 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:21:24.184 06:51:37 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:24.184 06:51:37 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:24.184 06:51:37 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:24.442 No valid NVMe controllers or AIO or URING devices found 00:21:24.442 Initializing NVMe Controllers 00:21:24.442 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:24.442 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:24.442 WARNING: Some requested NVMe devices were skipped 00:21:24.442 06:51:38 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:24.442 06:51:38 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:36.651 Initializing NVMe Controllers 00:21:36.651 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:36.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:36.651 Initialization complete. Launching workers. 00:21:36.651 ======================================================== 00:21:36.651 Latency(us) 00:21:36.651 Device Information : IOPS MiB/s Average min max 00:21:36.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 739.90 92.49 1351.06 402.38 7810.73 00:21:36.651 ======================================================== 00:21:36.651 Total : 739.90 92.49 1351.06 402.38 7810.73 00:21:36.651 00:21:36.651 06:51:48 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:36.651 06:51:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:36.651 06:51:48 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:36.651 No valid NVMe controllers or AIO or URING devices found 00:21:36.651 Initializing NVMe Controllers 00:21:36.651 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:36.651 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:36.651 WARNING: Some requested NVMe devices were skipped 00:21:36.651 06:51:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:36.651 06:51:48 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:46.634 Initializing NVMe Controllers 00:21:46.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:46.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:46.634 Initialization complete. Launching workers. 
00:21:46.634 ======================================================== 00:21:46.634 Latency(us) 00:21:46.634 Device Information : IOPS MiB/s Average min max 00:21:46.634 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 929.50 116.19 34472.25 7808.27 288835.01 00:21:46.634 ======================================================== 00:21:46.634 Total : 929.50 116.19 34472.25 7808.27 288835.01 00:21:46.634 00:21:46.634 06:51:59 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:46.634 06:51:59 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:46.634 06:51:59 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:46.634 No valid NVMe controllers or AIO or URING devices found 00:21:46.634 Initializing NVMe Controllers 00:21:46.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:46.634 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:46.634 WARNING: Some requested NVMe devices were skipped 00:21:46.634 06:51:59 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:46.634 06:51:59 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:56.615 Initializing NVMe Controllers 00:21:56.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:56.615 Controller IO queue size 128, less than required. 00:21:56.615 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:56.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:56.615 Initialization complete. Launching workers. 
00:21:56.616 ======================================================== 00:21:56.616 Latency(us) 00:21:56.616 Device Information : IOPS MiB/s Average min max 00:21:56.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3908.60 488.57 32755.51 12072.74 65620.67 00:21:56.616 ======================================================== 00:21:56.616 Total : 3908.60 488.57 32755.51 12072.74 65620.67 00:21:56.616 00:21:56.616 06:52:09 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:56.616 06:52:09 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 000159ab-c37c-4b84-b2c0-7630816e8bcf 00:21:56.616 06:52:10 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:56.875 06:52:10 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2edef6b9-4a5b-4d3c-bc03-792b1889c166 00:21:57.134 06:52:10 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:57.393 06:52:11 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:57.393 06:52:11 -- host/perf.sh@114 -- # nvmftestfini 00:21:57.393 06:52:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:57.393 06:52:11 -- nvmf/common.sh@116 -- # sync 00:21:57.393 06:52:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:57.393 06:52:11 -- nvmf/common.sh@119 -- # set +e 00:21:57.393 06:52:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:57.393 06:52:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:57.393 rmmod nvme_tcp 00:21:57.393 rmmod nvme_fabrics 00:21:57.393 rmmod nvme_keyring 00:21:57.393 06:52:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:57.393 06:52:11 -- nvmf/common.sh@123 -- # set -e 00:21:57.393 06:52:11 -- nvmf/common.sh@124 -- # return 0 00:21:57.393 06:52:11 -- nvmf/common.sh@477 -- # '[' -n 83250 ']' 00:21:57.393 06:52:11 -- nvmf/common.sh@478 -- # killprocess 83250 00:21:57.393 06:52:11 -- common/autotest_common.sh@936 -- # '[' -z 83250 ']' 00:21:57.393 06:52:11 -- common/autotest_common.sh@940 -- # kill -0 83250 00:21:57.393 06:52:11 -- common/autotest_common.sh@941 -- # uname 00:21:57.393 06:52:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:57.393 06:52:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83250 00:21:57.393 killing process with pid 83250 00:21:57.393 06:52:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:57.393 06:52:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:57.393 06:52:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83250' 00:21:57.393 06:52:11 -- common/autotest_common.sh@955 -- # kill 83250 00:21:57.393 06:52:11 -- common/autotest_common.sh@960 -- # wait 83250 00:21:59.322 06:52:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:59.322 06:52:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:59.322 06:52:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:59.322 06:52:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:59.322 06:52:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:59.322 06:52:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.322 06:52:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.322 06:52:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.322 06:52:12 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:21:59.322 ************************************ 00:21:59.322 END TEST nvmf_perf 00:21:59.322 ************************************ 00:21:59.322 00:21:59.322 real 0m51.529s 00:21:59.322 user 3m13.526s 00:21:59.322 sys 0m10.688s 00:21:59.322 06:52:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:59.322 06:52:12 -- common/autotest_common.sh@10 -- # set +x 00:21:59.322 06:52:13 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:59.322 06:52:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:59.322 06:52:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:59.322 06:52:13 -- common/autotest_common.sh@10 -- # set +x 00:21:59.322 ************************************ 00:21:59.322 START TEST nvmf_fio_host 00:21:59.322 ************************************ 00:21:59.322 06:52:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:59.322 * Looking for test storage... 00:21:59.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:59.322 06:52:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:59.322 06:52:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:59.322 06:52:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:59.322 06:52:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:59.322 06:52:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:59.322 06:52:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:59.322 06:52:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:59.322 06:52:13 -- scripts/common.sh@335 -- # IFS=.-: 00:21:59.322 06:52:13 -- scripts/common.sh@335 -- # read -ra ver1 00:21:59.322 06:52:13 -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.322 06:52:13 -- scripts/common.sh@336 -- # read -ra ver2 00:21:59.322 06:52:13 -- scripts/common.sh@337 -- # local 'op=<' 00:21:59.322 06:52:13 -- scripts/common.sh@339 -- # ver1_l=2 00:21:59.322 06:52:13 -- scripts/common.sh@340 -- # ver2_l=1 00:21:59.322 06:52:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:59.323 06:52:13 -- scripts/common.sh@343 -- # case "$op" in 00:21:59.323 06:52:13 -- scripts/common.sh@344 -- # : 1 00:21:59.323 06:52:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:59.323 06:52:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:59.323 06:52:13 -- scripts/common.sh@364 -- # decimal 1 00:21:59.323 06:52:13 -- scripts/common.sh@352 -- # local d=1 00:21:59.323 06:52:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.323 06:52:13 -- scripts/common.sh@354 -- # echo 1 00:21:59.323 06:52:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:59.323 06:52:13 -- scripts/common.sh@365 -- # decimal 2 00:21:59.323 06:52:13 -- scripts/common.sh@352 -- # local d=2 00:21:59.323 06:52:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.323 06:52:13 -- scripts/common.sh@354 -- # echo 2 00:21:59.323 06:52:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:59.323 06:52:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:59.323 06:52:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:59.323 06:52:13 -- scripts/common.sh@367 -- # return 0 00:21:59.323 06:52:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.323 06:52:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:59.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.323 --rc genhtml_branch_coverage=1 00:21:59.323 --rc genhtml_function_coverage=1 00:21:59.323 --rc genhtml_legend=1 00:21:59.323 --rc geninfo_all_blocks=1 00:21:59.323 --rc geninfo_unexecuted_blocks=1 00:21:59.323 00:21:59.323 ' 00:21:59.323 06:52:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:59.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.323 --rc genhtml_branch_coverage=1 00:21:59.323 --rc genhtml_function_coverage=1 00:21:59.323 --rc genhtml_legend=1 00:21:59.323 --rc geninfo_all_blocks=1 00:21:59.323 --rc geninfo_unexecuted_blocks=1 00:21:59.323 00:21:59.323 ' 00:21:59.323 06:52:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:59.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.323 --rc genhtml_branch_coverage=1 00:21:59.323 --rc genhtml_function_coverage=1 00:21:59.323 --rc genhtml_legend=1 00:21:59.323 --rc geninfo_all_blocks=1 00:21:59.323 --rc geninfo_unexecuted_blocks=1 00:21:59.323 00:21:59.323 ' 00:21:59.323 06:52:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:59.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.323 --rc genhtml_branch_coverage=1 00:21:59.323 --rc genhtml_function_coverage=1 00:21:59.323 --rc genhtml_legend=1 00:21:59.323 --rc geninfo_all_blocks=1 00:21:59.323 --rc geninfo_unexecuted_blocks=1 00:21:59.323 00:21:59.323 ' 00:21:59.323 06:52:13 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:59.323 06:52:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.323 06:52:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.323 06:52:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.323 06:52:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.323 06:52:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.323 06:52:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.323 06:52:13 -- paths/export.sh@5 -- # export PATH 00:21:59.323 06:52:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.323 06:52:13 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:59.323 06:52:13 -- nvmf/common.sh@7 -- # uname -s 00:21:59.323 06:52:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.323 06:52:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.323 06:52:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.323 06:52:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.323 06:52:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.323 06:52:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.323 06:52:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.323 06:52:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.323 06:52:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.323 06:52:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.323 06:52:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:21:59.323 06:52:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:21:59.323 06:52:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.323 06:52:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.323 06:52:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:59.323 06:52:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:59.323 06:52:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.323 06:52:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.323 06:52:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.323 06:52:13 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.323 06:52:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.323 06:52:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.323 06:52:13 -- paths/export.sh@5 -- # export PATH 00:21:59.323 06:52:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.323 06:52:13 -- nvmf/common.sh@46 -- # : 0 00:21:59.323 06:52:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:59.323 06:52:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:59.323 06:52:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:59.323 06:52:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.323 06:52:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.323 06:52:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:59.323 06:52:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:59.323 06:52:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:59.323 06:52:13 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:59.323 06:52:13 -- host/fio.sh@14 -- # nvmftestinit 00:21:59.323 06:52:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:59.323 06:52:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.323 06:52:13 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:21:59.323 06:52:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:59.323 06:52:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:59.324 06:52:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.324 06:52:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.324 06:52:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.324 06:52:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:59.324 06:52:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:59.324 06:52:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:59.324 06:52:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:59.324 06:52:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:59.324 06:52:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:59.324 06:52:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.324 06:52:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.324 06:52:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:59.324 06:52:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:59.324 06:52:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:59.324 06:52:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:59.324 06:52:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:59.324 06:52:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.324 06:52:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:59.324 06:52:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:59.324 06:52:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:59.324 06:52:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:59.324 06:52:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:59.324 06:52:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:59.324 Cannot find device "nvmf_tgt_br" 00:21:59.324 06:52:13 -- nvmf/common.sh@154 -- # true 00:21:59.324 06:52:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:59.324 Cannot find device "nvmf_tgt_br2" 00:21:59.324 06:52:13 -- nvmf/common.sh@155 -- # true 00:21:59.324 06:52:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:59.324 06:52:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:59.324 Cannot find device "nvmf_tgt_br" 00:21:59.324 06:52:13 -- nvmf/common.sh@157 -- # true 00:21:59.324 06:52:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:59.324 Cannot find device "nvmf_tgt_br2" 00:21:59.324 06:52:13 -- nvmf/common.sh@158 -- # true 00:21:59.324 06:52:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:59.583 06:52:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:59.583 06:52:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:59.583 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.583 06:52:13 -- nvmf/common.sh@161 -- # true 00:21:59.583 06:52:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:59.583 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.583 06:52:13 -- nvmf/common.sh@162 -- # true 00:21:59.583 06:52:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:59.583 06:52:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:59.583 06:52:13 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:59.583 06:52:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:59.583 06:52:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:59.583 06:52:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:59.583 06:52:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:59.583 06:52:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:59.583 06:52:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:59.583 06:52:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:59.583 06:52:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:59.583 06:52:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:59.583 06:52:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:59.583 06:52:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:59.583 06:52:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:59.583 06:52:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:59.583 06:52:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:59.583 06:52:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:59.583 06:52:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:59.583 06:52:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:59.583 06:52:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:59.583 06:52:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:59.583 06:52:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:59.583 06:52:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:59.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:21:59.583 00:21:59.583 --- 10.0.0.2 ping statistics --- 00:21:59.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.583 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:21:59.583 06:52:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:59.583 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:59.584 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:21:59.584 00:21:59.584 --- 10.0.0.3 ping statistics --- 00:21:59.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.584 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:21:59.584 06:52:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:59.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:59.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:21:59.584 00:21:59.584 --- 10.0.0.1 ping statistics --- 00:21:59.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.584 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:59.584 06:52:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.584 06:52:13 -- nvmf/common.sh@421 -- # return 0 00:21:59.584 06:52:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:59.584 06:52:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.584 06:52:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:59.584 06:52:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:59.584 06:52:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.584 06:52:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:59.584 06:52:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:59.843 06:52:13 -- host/fio.sh@16 -- # [[ y != y ]] 00:21:59.843 06:52:13 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:59.843 06:52:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:59.843 06:52:13 -- common/autotest_common.sh@10 -- # set +x 00:21:59.843 06:52:13 -- host/fio.sh@24 -- # nvmfpid=84232 00:21:59.843 06:52:13 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:59.843 06:52:13 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:59.843 06:52:13 -- host/fio.sh@28 -- # waitforlisten 84232 00:21:59.843 06:52:13 -- common/autotest_common.sh@829 -- # '[' -z 84232 ']' 00:21:59.843 06:52:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.843 06:52:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:59.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.843 06:52:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.843 06:52:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:59.843 06:52:13 -- common/autotest_common.sh@10 -- # set +x 00:21:59.843 [2024-12-14 06:52:13.651520] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:59.843 [2024-12-14 06:52:13.651856] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.843 [2024-12-14 06:52:13.785779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:00.103 [2024-12-14 06:52:13.892652] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:00.103 [2024-12-14 06:52:13.892804] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.103 [2024-12-14 06:52:13.892818] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.103 [2024-12-14 06:52:13.892826] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:00.103 [2024-12-14 06:52:13.892970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.103 [2024-12-14 06:52:13.893755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.103 [2024-12-14 06:52:13.893880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:00.103 [2024-12-14 06:52:13.893883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.671 06:52:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:00.671 06:52:14 -- common/autotest_common.sh@862 -- # return 0 00:22:00.671 06:52:14 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:00.930 [2024-12-14 06:52:14.868141] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.930 06:52:14 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:00.930 06:52:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:00.930 06:52:14 -- common/autotest_common.sh@10 -- # set +x 00:22:01.190 06:52:14 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:01.449 Malloc1 00:22:01.449 06:52:15 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:01.707 06:52:15 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:01.966 06:52:15 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:02.225 [2024-12-14 06:52:15.985593] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.225 06:52:16 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:02.482 06:52:16 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:22:02.482 06:52:16 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:02.482 06:52:16 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:02.482 06:52:16 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:22:02.482 06:52:16 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:02.482 06:52:16 -- common/autotest_common.sh@1328 -- # local sanitizers 00:22:02.482 06:52:16 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:02.482 06:52:16 -- common/autotest_common.sh@1330 -- # shift 00:22:02.483 06:52:16 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:22:02.483 06:52:16 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:22:02.483 06:52:16 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:02.483 06:52:16 -- common/autotest_common.sh@1334 -- # grep libasan 00:22:02.483 06:52:16 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:22:02.483 06:52:16 -- common/autotest_common.sh@1334 -- # asan_lib= 00:22:02.483 06:52:16 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:22:02.483 06:52:16 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:22:02.483 06:52:16 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:02.483 06:52:16 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:22:02.483 06:52:16 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:22:02.483 06:52:16 -- common/autotest_common.sh@1334 -- # asan_lib= 00:22:02.483 06:52:16 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:22:02.483 06:52:16 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:02.483 06:52:16 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:02.483 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:02.483 fio-3.35 00:22:02.483 Starting 1 thread 00:22:05.018 00:22:05.018 test: (groupid=0, jobs=1): err= 0: pid=84359: Sat Dec 14 06:52:18 2024 00:22:05.018 read: IOPS=9090, BW=35.5MiB/s (37.2MB/s)(71.2MiB/2006msec) 00:22:05.018 slat (nsec): min=1850, max=347822, avg=2391.72, stdev=3621.71 00:22:05.018 clat (usec): min=3309, max=15049, avg=7486.89, stdev=1488.05 00:22:05.018 lat (usec): min=3369, max=15051, avg=7489.28, stdev=1487.97 00:22:05.018 clat percentiles (usec): 00:22:05.018 | 1.00th=[ 5211], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6194], 00:22:05.018 | 30.00th=[ 6456], 40.00th=[ 6718], 50.00th=[ 7111], 60.00th=[ 7635], 00:22:05.018 | 70.00th=[ 8225], 80.00th=[ 8848], 90.00th=[ 9634], 95.00th=[10290], 00:22:05.018 | 99.00th=[11469], 99.50th=[11863], 99.90th=[12911], 99.95th=[13304], 00:22:05.018 | 99.99th=[15008] 00:22:05.018 bw ( KiB/s): min=31744, max=42000, per=99.94%, avg=36340.00, stdev=5053.97, samples=4 00:22:05.018 iops : min= 7936, max=10500, avg=9085.00, stdev=1263.49, samples=4 00:22:05.018 write: IOPS=9104, BW=35.6MiB/s (37.3MB/s)(71.3MiB/2006msec); 0 zone resets 00:22:05.018 slat (nsec): min=1911, max=313589, avg=2463.20, stdev=2679.27 00:22:05.018 clat (usec): min=2533, max=12889, avg=6548.21, stdev=1286.75 00:22:05.018 lat (usec): min=2547, max=12891, avg=6550.68, stdev=1286.67 00:22:05.018 clat percentiles (usec): 00:22:05.018 | 1.00th=[ 4555], 5.00th=[ 5014], 10.00th=[ 5211], 20.00th=[ 5473], 00:22:05.018 | 30.00th=[ 5669], 40.00th=[ 5866], 50.00th=[ 6128], 60.00th=[ 6652], 00:22:05.018 | 70.00th=[ 7177], 80.00th=[ 7701], 90.00th=[ 8455], 95.00th=[ 8979], 00:22:05.018 | 99.00th=[ 9896], 99.50th=[10421], 99.90th=[11863], 99.95th=[12256], 00:22:05.018 | 99.99th=[12911] 00:22:05.018 bw ( KiB/s): min=31912, max=42480, per=99.96%, avg=36402.00, stdev=5048.41, samples=4 00:22:05.018 iops : min= 7978, max=10620, avg=9100.50, stdev=1262.10, samples=4 00:22:05.018 lat (msec) : 4=0.13%, 10=96.04%, 20=3.83% 00:22:05.018 cpu : usr=67.28%, sys=24.04%, ctx=48, majf=0, minf=5 00:22:05.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:05.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:05.018 issued rwts: total=18235,18263,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.018 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:05.018 00:22:05.018 Run status group 0 (all jobs): 00:22:05.018 READ: bw=35.5MiB/s (37.2MB/s), 35.5MiB/s-35.5MiB/s (37.2MB/s-37.2MB/s), io=71.2MiB (74.7MB), 
run=2006-2006msec 00:22:05.018 WRITE: bw=35.6MiB/s (37.3MB/s), 35.6MiB/s-35.6MiB/s (37.3MB/s-37.3MB/s), io=71.3MiB (74.8MB), run=2006-2006msec 00:22:05.018 06:52:18 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:05.018 06:52:18 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:05.018 06:52:18 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:22:05.018 06:52:18 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:05.018 06:52:18 -- common/autotest_common.sh@1328 -- # local sanitizers 00:22:05.018 06:52:18 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:05.018 06:52:18 -- common/autotest_common.sh@1330 -- # shift 00:22:05.018 06:52:18 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:22:05.018 06:52:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:22:05.018 06:52:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:05.018 06:52:18 -- common/autotest_common.sh@1334 -- # grep libasan 00:22:05.018 06:52:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:22:05.018 06:52:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:22:05.018 06:52:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:22:05.018 06:52:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:22:05.018 06:52:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:05.018 06:52:18 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:22:05.018 06:52:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:22:05.018 06:52:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:22:05.018 06:52:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:22:05.018 06:52:18 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:05.018 06:52:18 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:05.018 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:05.018 fio-3.35 00:22:05.018 Starting 1 thread 00:22:07.555 00:22:07.555 test: (groupid=0, jobs=1): err= 0: pid=84408: Sat Dec 14 06:52:21 2024 00:22:07.555 read: IOPS=8261, BW=129MiB/s (135MB/s)(259MiB/2004msec) 00:22:07.555 slat (usec): min=2, max=102, avg= 3.79, stdev= 2.62 00:22:07.555 clat (usec): min=2826, max=18909, avg=9152.18, stdev=2282.30 00:22:07.555 lat (usec): min=2829, max=18912, avg=9155.97, stdev=2282.35 00:22:07.555 clat percentiles (usec): 00:22:07.555 | 1.00th=[ 4752], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7242], 00:22:07.555 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9503], 00:22:07.555 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11994], 95.00th=[13304], 00:22:07.555 | 99.00th=[15926], 99.50th=[16581], 99.90th=[17433], 99.95th=[17695], 00:22:07.555 | 99.99th=[18220] 00:22:07.555 bw ( KiB/s): min=61760, max=75776, per=51.04%, avg=67472.00, stdev=6092.76, samples=4 00:22:07.555 iops : 
min= 3860, max= 4736, avg=4217.00, stdev=380.80, samples=4 00:22:07.555 write: IOPS=4779, BW=74.7MiB/s (78.3MB/s)(138MiB/1848msec); 0 zone resets 00:22:07.555 slat (usec): min=30, max=267, avg=35.09, stdev= 7.91 00:22:07.555 clat (usec): min=3035, max=19771, avg=11172.50, stdev=2002.27 00:22:07.555 lat (usec): min=3067, max=19839, avg=11207.59, stdev=2002.52 00:22:07.555 clat percentiles (usec): 00:22:07.555 | 1.00th=[ 7373], 5.00th=[ 8356], 10.00th=[ 8848], 20.00th=[ 9503], 00:22:07.555 | 30.00th=[10159], 40.00th=[10552], 50.00th=[10945], 60.00th=[11338], 00:22:07.555 | 70.00th=[11994], 80.00th=[12649], 90.00th=[13829], 95.00th=[15008], 00:22:07.555 | 99.00th=[16909], 99.50th=[17433], 99.90th=[19268], 99.95th=[19268], 00:22:07.555 | 99.99th=[19792] 00:22:07.555 bw ( KiB/s): min=63520, max=78304, per=91.76%, avg=70168.00, stdev=6376.19, samples=4 00:22:07.555 iops : min= 3970, max= 4894, avg=4385.50, stdev=398.51, samples=4 00:22:07.555 lat (msec) : 4=0.25%, 10=52.94%, 20=46.81% 00:22:07.555 cpu : usr=71.16%, sys=18.76%, ctx=13, majf=0, minf=1 00:22:07.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:07.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:07.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:07.555 issued rwts: total=16556,8832,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:07.555 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:07.555 00:22:07.555 Run status group 0 (all jobs): 00:22:07.555 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (271MB), run=2004-2004msec 00:22:07.555 WRITE: bw=74.7MiB/s (78.3MB/s), 74.7MiB/s-74.7MiB/s (78.3MB/s-78.3MB/s), io=138MiB (145MB), run=1848-1848msec 00:22:07.555 06:52:21 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:07.555 06:52:21 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:22:07.555 06:52:21 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:22:07.555 06:52:21 -- host/fio.sh@51 -- # get_nvme_bdfs 00:22:07.555 06:52:21 -- common/autotest_common.sh@1508 -- # bdfs=() 00:22:07.555 06:52:21 -- common/autotest_common.sh@1508 -- # local bdfs 00:22:07.555 06:52:21 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:07.555 06:52:21 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:22:07.555 06:52:21 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:07.814 06:52:21 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:22:07.814 06:52:21 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:22:07.814 06:52:21 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:22:08.073 Nvme0n1 00:22:08.073 06:52:21 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:22:08.332 06:52:22 -- host/fio.sh@53 -- # ls_guid=a7619049-bc21-4415-a744-b34f096d8906 00:22:08.332 06:52:22 -- host/fio.sh@54 -- # get_lvs_free_mb a7619049-bc21-4415-a744-b34f096d8906 00:22:08.332 06:52:22 -- common/autotest_common.sh@1353 -- # local lvs_uuid=a7619049-bc21-4415-a744-b34f096d8906 00:22:08.332 06:52:22 -- common/autotest_common.sh@1354 -- # local lvs_info 00:22:08.332 06:52:22 -- common/autotest_common.sh@1355 -- # local fc 00:22:08.332 06:52:22 -- 
common/autotest_common.sh@1356 -- # local cs 00:22:08.332 06:52:22 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:08.591 06:52:22 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:22:08.591 { 00:22:08.591 "base_bdev": "Nvme0n1", 00:22:08.591 "block_size": 4096, 00:22:08.591 "cluster_size": 1073741824, 00:22:08.591 "free_clusters": 4, 00:22:08.591 "name": "lvs_0", 00:22:08.591 "total_data_clusters": 4, 00:22:08.591 "uuid": "a7619049-bc21-4415-a744-b34f096d8906" 00:22:08.591 } 00:22:08.591 ]' 00:22:08.591 06:52:22 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="a7619049-bc21-4415-a744-b34f096d8906") .free_clusters' 00:22:08.591 06:52:22 -- common/autotest_common.sh@1358 -- # fc=4 00:22:08.591 06:52:22 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="a7619049-bc21-4415-a744-b34f096d8906") .cluster_size' 00:22:08.591 06:52:22 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:22:08.591 06:52:22 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:22:08.591 06:52:22 -- common/autotest_common.sh@1363 -- # echo 4096 00:22:08.591 4096 00:22:08.591 06:52:22 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:22:08.871 2ca6fa78-4b59-43da-b4dc-4c31af5b42fa 00:22:08.871 06:52:22 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:22:09.130 06:52:23 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:22:09.388 06:52:23 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:09.647 06:52:23 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:09.647 06:52:23 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:09.647 06:52:23 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:22:09.647 06:52:23 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:09.647 06:52:23 -- common/autotest_common.sh@1328 -- # local sanitizers 00:22:09.647 06:52:23 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:09.647 06:52:23 -- common/autotest_common.sh@1330 -- # shift 00:22:09.647 06:52:23 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:22:09.647 06:52:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:22:09.647 06:52:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:09.647 06:52:23 -- common/autotest_common.sh@1334 -- # grep libasan 00:22:09.647 06:52:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:22:09.647 06:52:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:22:09.647 06:52:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:22:09.647 06:52:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:22:09.647 06:52:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:09.647 06:52:23 -- 
common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:22:09.647 06:52:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:22:09.647 06:52:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:22:09.647 06:52:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:22:09.647 06:52:23 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:09.647 06:52:23 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:09.906 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:09.906 fio-3.35 00:22:09.906 Starting 1 thread 00:22:12.441 00:22:12.441 test: (groupid=0, jobs=1): err= 0: pid=84566: Sat Dec 14 06:52:26 2024 00:22:12.441 read: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(51.9MiB/2007msec) 00:22:12.441 slat (nsec): min=1904, max=347239, avg=2498.70, stdev=3957.19 00:22:12.441 clat (usec): min=4220, max=18562, avg=10255.13, stdev=963.22 00:22:12.441 lat (usec): min=4229, max=18564, avg=10257.63, stdev=963.02 00:22:12.441 clat percentiles (usec): 00:22:12.441 | 1.00th=[ 8225], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503], 00:22:12.441 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:22:12.441 | 70.00th=[10683], 80.00th=[11076], 90.00th=[11469], 95.00th=[11863], 00:22:12.441 | 99.00th=[12518], 99.50th=[12911], 99.90th=[16057], 99.95th=[17695], 00:22:12.441 | 99.99th=[18482] 00:22:12.441 bw ( KiB/s): min=26128, max=26800, per=99.74%, avg=26422.75, stdev=337.92, samples=4 00:22:12.441 iops : min= 6532, max= 6700, avg=6605.50, stdev=84.69, samples=4 00:22:12.441 write: IOPS=6628, BW=25.9MiB/s (27.2MB/s)(52.0MiB/2007msec); 0 zone resets 00:22:12.441 slat (usec): min=2, max=297, avg= 2.60, stdev= 2.96 00:22:12.441 clat (usec): min=2415, max=14867, avg=8986.44, stdev=807.61 00:22:12.441 lat (usec): min=2427, max=14870, avg=8989.04, stdev=807.44 00:22:12.441 clat percentiles (usec): 00:22:12.441 | 1.00th=[ 7177], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8356], 00:22:12.441 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9241], 00:22:12.441 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10290], 00:22:12.441 | 99.00th=[10683], 99.50th=[11076], 99.90th=[13566], 99.95th=[14615], 00:22:12.441 | 99.99th=[14877] 00:22:12.441 bw ( KiB/s): min=26048, max=27224, per=99.90%, avg=26488.75, stdev=525.30, samples=4 00:22:12.441 iops : min= 6512, max= 6806, avg=6622.00, stdev=131.32, samples=4 00:22:12.441 lat (msec) : 4=0.05%, 10=65.79%, 20=34.16% 00:22:12.441 cpu : usr=71.44%, sys=22.18%, ctx=40, majf=0, minf=5 00:22:12.441 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:12.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:12.441 issued rwts: total=13292,13304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:12.441 00:22:12.441 Run status group 0 (all jobs): 00:22:12.441 READ: bw=25.9MiB/s (27.1MB/s), 25.9MiB/s-25.9MiB/s (27.1MB/s-27.1MB/s), io=51.9MiB (54.4MB), run=2007-2007msec 00:22:12.441 WRITE: bw=25.9MiB/s (27.2MB/s), 25.9MiB/s-25.9MiB/s (27.2MB/s-27.2MB/s), io=52.0MiB (54.5MB), run=2007-2007msec 00:22:12.441 06:52:26 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:12.441 06:52:26 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:22:12.700 06:52:26 -- host/fio.sh@64 -- # ls_nested_guid=704f814d-8ca3-444a-a0fe-0a5521296a5b 00:22:12.700 06:52:26 -- host/fio.sh@65 -- # get_lvs_free_mb 704f814d-8ca3-444a-a0fe-0a5521296a5b 00:22:12.700 06:52:26 -- common/autotest_common.sh@1353 -- # local lvs_uuid=704f814d-8ca3-444a-a0fe-0a5521296a5b 00:22:12.700 06:52:26 -- common/autotest_common.sh@1354 -- # local lvs_info 00:22:12.700 06:52:26 -- common/autotest_common.sh@1355 -- # local fc 00:22:12.700 06:52:26 -- common/autotest_common.sh@1356 -- # local cs 00:22:12.700 06:52:26 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:12.959 06:52:26 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:22:12.959 { 00:22:12.959 "base_bdev": "Nvme0n1", 00:22:12.959 "block_size": 4096, 00:22:12.959 "cluster_size": 1073741824, 00:22:12.959 "free_clusters": 0, 00:22:12.959 "name": "lvs_0", 00:22:12.959 "total_data_clusters": 4, 00:22:12.959 "uuid": "a7619049-bc21-4415-a744-b34f096d8906" 00:22:12.959 }, 00:22:12.959 { 00:22:12.959 "base_bdev": "2ca6fa78-4b59-43da-b4dc-4c31af5b42fa", 00:22:12.959 "block_size": 4096, 00:22:12.959 "cluster_size": 4194304, 00:22:12.959 "free_clusters": 1022, 00:22:12.959 "name": "lvs_n_0", 00:22:12.959 "total_data_clusters": 1022, 00:22:12.959 "uuid": "704f814d-8ca3-444a-a0fe-0a5521296a5b" 00:22:12.959 } 00:22:12.959 ]' 00:22:12.959 06:52:26 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="704f814d-8ca3-444a-a0fe-0a5521296a5b") .free_clusters' 00:22:12.959 06:52:26 -- common/autotest_common.sh@1358 -- # fc=1022 00:22:12.959 06:52:26 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="704f814d-8ca3-444a-a0fe-0a5521296a5b") .cluster_size' 00:22:13.219 4088 00:22:13.219 06:52:26 -- common/autotest_common.sh@1359 -- # cs=4194304 00:22:13.219 06:52:26 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:22:13.219 06:52:26 -- common/autotest_common.sh@1363 -- # echo 4088 00:22:13.219 06:52:26 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:22:13.219 1efb30f0-d964-452d-9f1d-a0532945748e 00:22:13.479 06:52:27 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:22:13.479 06:52:27 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:22:14.047 06:52:27 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:22:14.047 06:52:27 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:14.047 06:52:27 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:14.047 06:52:27 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:22:14.047 06:52:27 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:14.047 
06:52:27 -- common/autotest_common.sh@1328 -- # local sanitizers 00:22:14.047 06:52:27 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:14.047 06:52:27 -- common/autotest_common.sh@1330 -- # shift 00:22:14.047 06:52:27 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:22:14.047 06:52:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:22:14.047 06:52:27 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:14.047 06:52:27 -- common/autotest_common.sh@1334 -- # grep libasan 00:22:14.047 06:52:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:22:14.047 06:52:27 -- common/autotest_common.sh@1334 -- # asan_lib= 00:22:14.047 06:52:27 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:22:14.047 06:52:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:22:14.047 06:52:27 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:14.047 06:52:27 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:22:14.047 06:52:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:22:14.047 06:52:28 -- common/autotest_common.sh@1334 -- # asan_lib= 00:22:14.047 06:52:28 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:22:14.047 06:52:28 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:14.047 06:52:28 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:14.306 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:14.306 fio-3.35 00:22:14.306 Starting 1 thread 00:22:16.846 00:22:16.846 test: (groupid=0, jobs=1): err= 0: pid=84685: Sat Dec 14 06:52:30 2024 00:22:16.846 read: IOPS=5895, BW=23.0MiB/s (24.1MB/s)(46.3MiB/2009msec) 00:22:16.846 slat (nsec): min=1957, max=352432, avg=2518.33, stdev=4224.90 00:22:16.846 clat (usec): min=4676, max=20692, avg=11538.38, stdev=1084.57 00:22:16.846 lat (usec): min=4686, max=20695, avg=11540.90, stdev=1084.33 00:22:16.846 clat percentiles (usec): 00:22:16.846 | 1.00th=[ 9241], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10683], 00:22:16.846 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:22:16.846 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12780], 95.00th=[13304], 00:22:16.846 | 99.00th=[14222], 99.50th=[14484], 99.90th=[19268], 99.95th=[20317], 00:22:16.846 | 99.99th=[20579] 00:22:16.846 bw ( KiB/s): min=22674, max=23976, per=99.88%, avg=23556.50, stdev=600.55, samples=4 00:22:16.846 iops : min= 5668, max= 5994, avg=5889.00, stdev=150.38, samples=4 00:22:16.846 write: IOPS=5895, BW=23.0MiB/s (24.1MB/s)(46.3MiB/2009msec); 0 zone resets 00:22:16.846 slat (usec): min=2, max=265, avg= 2.62, stdev= 2.96 00:22:16.846 clat (usec): min=2389, max=17926, avg=10087.55, stdev=930.39 00:22:16.846 lat (usec): min=2400, max=17928, avg=10090.17, stdev=930.22 00:22:16.846 clat percentiles (usec): 00:22:16.846 | 1.00th=[ 7963], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:22:16.846 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:22:16.846 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:22:16.846 | 99.00th=[11994], 99.50th=[12387], 99.90th=[16909], 99.95th=[17695], 00:22:16.846 | 99.99th=[17957] 00:22:16.846 bw ( 
KiB/s): min=23488, max=23616, per=99.84%, avg=23546.00, stdev=53.62, samples=4 00:22:16.846 iops : min= 5872, max= 5904, avg=5886.50, stdev=13.40, samples=4 00:22:16.846 lat (msec) : 4=0.04%, 10=25.85%, 20=74.08%, 50=0.03% 00:22:16.846 cpu : usr=73.66%, sys=20.57%, ctx=8, majf=0, minf=5 00:22:16.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:22:16.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:16.847 issued rwts: total=11845,11845,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:16.847 00:22:16.847 Run status group 0 (all jobs): 00:22:16.847 READ: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=46.3MiB (48.5MB), run=2009-2009msec 00:22:16.847 WRITE: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=46.3MiB (48.5MB), run=2009-2009msec 00:22:16.847 06:52:30 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:16.847 06:52:30 -- host/fio.sh@74 -- # sync 00:22:16.847 06:52:30 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:22:17.137 06:52:30 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:22:17.407 06:52:31 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:22:17.666 06:52:31 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:22:17.666 06:52:31 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:22:19.043 06:52:32 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:19.043 06:52:32 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:19.043 06:52:32 -- host/fio.sh@86 -- # nvmftestfini 00:22:19.043 06:52:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:19.043 06:52:32 -- nvmf/common.sh@116 -- # sync 00:22:19.043 06:52:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:19.043 06:52:32 -- nvmf/common.sh@119 -- # set +e 00:22:19.043 06:52:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:19.043 06:52:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:19.043 rmmod nvme_tcp 00:22:19.043 rmmod nvme_fabrics 00:22:19.043 rmmod nvme_keyring 00:22:19.043 06:52:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:19.043 06:52:32 -- nvmf/common.sh@123 -- # set -e 00:22:19.043 06:52:32 -- nvmf/common.sh@124 -- # return 0 00:22:19.043 06:52:32 -- nvmf/common.sh@477 -- # '[' -n 84232 ']' 00:22:19.043 06:52:32 -- nvmf/common.sh@478 -- # killprocess 84232 00:22:19.043 06:52:32 -- common/autotest_common.sh@936 -- # '[' -z 84232 ']' 00:22:19.043 06:52:32 -- common/autotest_common.sh@940 -- # kill -0 84232 00:22:19.043 06:52:32 -- common/autotest_common.sh@941 -- # uname 00:22:19.043 06:52:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:19.043 06:52:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84232 00:22:19.043 killing process with pid 84232 00:22:19.043 06:52:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:19.043 06:52:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:19.043 06:52:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84232' 00:22:19.043 06:52:32 -- 
common/autotest_common.sh@955 -- # kill 84232 00:22:19.043 06:52:32 -- common/autotest_common.sh@960 -- # wait 84232 00:22:19.303 06:52:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:19.303 06:52:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:19.303 06:52:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:19.303 06:52:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:19.303 06:52:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:19.303 06:52:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.303 06:52:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:19.303 06:52:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.303 06:52:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:19.303 00:22:19.303 real 0m20.256s 00:22:19.303 user 1m27.921s 00:22:19.303 sys 0m4.551s 00:22:19.303 06:52:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:19.303 06:52:33 -- common/autotest_common.sh@10 -- # set +x 00:22:19.303 ************************************ 00:22:19.303 END TEST nvmf_fio_host 00:22:19.303 ************************************ 00:22:19.562 06:52:33 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:19.562 06:52:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:19.562 06:52:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:19.562 06:52:33 -- common/autotest_common.sh@10 -- # set +x 00:22:19.562 ************************************ 00:22:19.562 START TEST nvmf_failover 00:22:19.562 ************************************ 00:22:19.562 06:52:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:19.562 * Looking for test storage... 00:22:19.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:19.562 06:52:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:19.562 06:52:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:19.562 06:52:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:19.562 06:52:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:19.562 06:52:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:19.562 06:52:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:19.562 06:52:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:19.562 06:52:33 -- scripts/common.sh@335 -- # IFS=.-: 00:22:19.562 06:52:33 -- scripts/common.sh@335 -- # read -ra ver1 00:22:19.562 06:52:33 -- scripts/common.sh@336 -- # IFS=.-: 00:22:19.562 06:52:33 -- scripts/common.sh@336 -- # read -ra ver2 00:22:19.562 06:52:33 -- scripts/common.sh@337 -- # local 'op=<' 00:22:19.562 06:52:33 -- scripts/common.sh@339 -- # ver1_l=2 00:22:19.563 06:52:33 -- scripts/common.sh@340 -- # ver2_l=1 00:22:19.563 06:52:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:19.563 06:52:33 -- scripts/common.sh@343 -- # case "$op" in 00:22:19.563 06:52:33 -- scripts/common.sh@344 -- # : 1 00:22:19.563 06:52:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:19.563 06:52:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:19.563 06:52:33 -- scripts/common.sh@364 -- # decimal 1 00:22:19.563 06:52:33 -- scripts/common.sh@352 -- # local d=1 00:22:19.563 06:52:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:19.563 06:52:33 -- scripts/common.sh@354 -- # echo 1 00:22:19.563 06:52:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:19.563 06:52:33 -- scripts/common.sh@365 -- # decimal 2 00:22:19.563 06:52:33 -- scripts/common.sh@352 -- # local d=2 00:22:19.563 06:52:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:19.563 06:52:33 -- scripts/common.sh@354 -- # echo 2 00:22:19.563 06:52:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:19.563 06:52:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:19.563 06:52:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:19.563 06:52:33 -- scripts/common.sh@367 -- # return 0 00:22:19.563 06:52:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:19.563 06:52:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:19.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.563 --rc genhtml_branch_coverage=1 00:22:19.563 --rc genhtml_function_coverage=1 00:22:19.563 --rc genhtml_legend=1 00:22:19.563 --rc geninfo_all_blocks=1 00:22:19.563 --rc geninfo_unexecuted_blocks=1 00:22:19.563 00:22:19.563 ' 00:22:19.563 06:52:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:19.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.563 --rc genhtml_branch_coverage=1 00:22:19.563 --rc genhtml_function_coverage=1 00:22:19.563 --rc genhtml_legend=1 00:22:19.563 --rc geninfo_all_blocks=1 00:22:19.563 --rc geninfo_unexecuted_blocks=1 00:22:19.563 00:22:19.563 ' 00:22:19.563 06:52:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:19.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.563 --rc genhtml_branch_coverage=1 00:22:19.563 --rc genhtml_function_coverage=1 00:22:19.563 --rc genhtml_legend=1 00:22:19.563 --rc geninfo_all_blocks=1 00:22:19.563 --rc geninfo_unexecuted_blocks=1 00:22:19.563 00:22:19.563 ' 00:22:19.563 06:52:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:19.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.563 --rc genhtml_branch_coverage=1 00:22:19.563 --rc genhtml_function_coverage=1 00:22:19.563 --rc genhtml_legend=1 00:22:19.563 --rc geninfo_all_blocks=1 00:22:19.563 --rc geninfo_unexecuted_blocks=1 00:22:19.563 00:22:19.563 ' 00:22:19.563 06:52:33 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:19.563 06:52:33 -- nvmf/common.sh@7 -- # uname -s 00:22:19.563 06:52:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:19.563 06:52:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.563 06:52:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.563 06:52:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.563 06:52:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.563 06:52:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.563 06:52:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.563 06:52:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.563 06:52:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.563 06:52:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.563 06:52:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:22:19.563 
06:52:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:22:19.563 06:52:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.563 06:52:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.563 06:52:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:19.563 06:52:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:19.563 06:52:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.563 06:52:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.563 06:52:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.563 06:52:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.563 06:52:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.563 06:52:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.563 06:52:33 -- paths/export.sh@5 -- # export PATH 00:22:19.563 06:52:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.563 06:52:33 -- nvmf/common.sh@46 -- # : 0 00:22:19.563 06:52:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:19.563 06:52:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:19.563 06:52:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:19.563 06:52:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:19.563 06:52:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.563 06:52:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
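The cmp_versions trace a few lines above (scripts/common.sh, entered via "lt 1.15 2" while checking the installed lcov version) splits both version strings on ".", "-" and ":" and compares the fields numerically, left to right. A condensed sketch of that comparison under the same splitting rules; the helper name version_lt is illustrative and not part of scripts/common.sh, and the per-field validation done by the real decimal() helper is omitted:

version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # missing fields count as 0, so "1.15" vs "2" compares 1 vs 2 and stops there
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo 'lcov 1.15 is older than 2.x'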
00:22:19.563 06:52:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:19.563 06:52:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:19.563 06:52:33 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:19.563 06:52:33 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:19.563 06:52:33 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:19.563 06:52:33 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:19.563 06:52:33 -- host/failover.sh@18 -- # nvmftestinit 00:22:19.563 06:52:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:19.563 06:52:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.563 06:52:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:19.563 06:52:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:19.563 06:52:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:19.563 06:52:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.563 06:52:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:19.563 06:52:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.563 06:52:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:19.563 06:52:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:19.563 06:52:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:19.563 06:52:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:19.563 06:52:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:19.563 06:52:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:19.563 06:52:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.563 06:52:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.563 06:52:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:19.563 06:52:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:19.563 06:52:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:19.563 06:52:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:19.563 06:52:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:19.563 06:52:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.563 06:52:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:19.563 06:52:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:19.563 06:52:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:19.563 06:52:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:19.563 06:52:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:19.563 06:52:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:19.563 Cannot find device "nvmf_tgt_br" 00:22:19.563 06:52:33 -- nvmf/common.sh@154 -- # true 00:22:19.564 06:52:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:19.564 Cannot find device "nvmf_tgt_br2" 00:22:19.564 06:52:33 -- nvmf/common.sh@155 -- # true 00:22:19.564 06:52:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:19.564 06:52:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:19.823 Cannot find device "nvmf_tgt_br" 00:22:19.823 06:52:33 -- nvmf/common.sh@157 -- # true 00:22:19.823 06:52:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:19.823 Cannot find device "nvmf_tgt_br2" 00:22:19.823 06:52:33 -- nvmf/common.sh@158 -- # true 00:22:19.823 06:52:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:19.823 06:52:33 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:22:19.823 06:52:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:19.823 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:19.823 06:52:33 -- nvmf/common.sh@161 -- # true 00:22:19.823 06:52:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:19.823 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:19.823 06:52:33 -- nvmf/common.sh@162 -- # true 00:22:19.823 06:52:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:19.823 06:52:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:19.823 06:52:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:19.823 06:52:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:19.823 06:52:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:19.823 06:52:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:19.823 06:52:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:19.823 06:52:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:19.823 06:52:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:19.823 06:52:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:19.823 06:52:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:19.823 06:52:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:19.823 06:52:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:19.823 06:52:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:19.823 06:52:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:19.823 06:52:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:19.823 06:52:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:19.823 06:52:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:19.823 06:52:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:19.823 06:52:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:19.823 06:52:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:19.823 06:52:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:19.823 06:52:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:19.823 06:52:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:20.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:20.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:22:20.083 00:22:20.083 --- 10.0.0.2 ping statistics --- 00:22:20.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.083 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:22:20.083 06:52:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:20.083 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:20.083 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:22:20.083 00:22:20.083 --- 10.0.0.3 ping statistics --- 00:22:20.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.083 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:22:20.083 06:52:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:20.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:20.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:22:20.083 00:22:20.083 --- 10.0.0.1 ping statistics --- 00:22:20.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.083 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:22:20.083 06:52:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:20.083 06:52:33 -- nvmf/common.sh@421 -- # return 0 00:22:20.083 06:52:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:20.083 06:52:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:20.083 06:52:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:20.083 06:52:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:20.083 06:52:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:20.083 06:52:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:20.083 06:52:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:20.083 06:52:33 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:20.083 06:52:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:20.083 06:52:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:20.083 06:52:33 -- common/autotest_common.sh@10 -- # set +x 00:22:20.083 06:52:33 -- nvmf/common.sh@469 -- # nvmfpid=84966 00:22:20.083 06:52:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:20.083 06:52:33 -- nvmf/common.sh@470 -- # waitforlisten 84966 00:22:20.083 06:52:33 -- common/autotest_common.sh@829 -- # '[' -z 84966 ']' 00:22:20.083 06:52:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.083 06:52:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:20.083 06:52:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.083 06:52:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:20.083 06:52:33 -- common/autotest_common.sh@10 -- # set +x 00:22:20.083 [2024-12-14 06:52:33.923933] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:20.083 [2024-12-14 06:52:33.924060] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.083 [2024-12-14 06:52:34.062151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:20.342 [2024-12-14 06:52:34.188347] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:20.342 [2024-12-14 06:52:34.188507] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.343 [2024-12-14 06:52:34.188519] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
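The nvmf_veth_init block above builds the virtual test network: a network namespace for the target, veth pairs whose host-side ends are joined by a bridge, 10.0.0.x/24 addressing from NVMF_IP_PREFIX, and ping checks in both directions before the target is started. A minimal standalone sketch of the same topology, assuming the interface names seen above and omitting the second target interface (nvmf_tgt_if2 / 10.0.0.3) and all cleanup:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                              # bridge joins the host-side ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                           # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target -> initiator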
00:22:20.343 [2024-12-14 06:52:34.188527] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:20.343 [2024-12-14 06:52:34.188693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:20.343 [2024-12-14 06:52:34.189353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:20.343 [2024-12-14 06:52:34.189405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.280 06:52:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:21.280 06:52:34 -- common/autotest_common.sh@862 -- # return 0 00:22:21.280 06:52:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:21.280 06:52:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:21.280 06:52:34 -- common/autotest_common.sh@10 -- # set +x 00:22:21.280 06:52:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.280 06:52:34 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:21.280 [2024-12-14 06:52:35.266517] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.538 06:52:35 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:21.797 Malloc0 00:22:21.797 06:52:35 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:22.056 06:52:35 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:22.056 06:52:36 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:22.315 [2024-12-14 06:52:36.280082] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.315 06:52:36 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:22.575 [2024-12-14 06:52:36.552273] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:22.834 06:52:36 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:22.834 [2024-12-14 06:52:36.784493] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:22.834 06:52:36 -- host/failover.sh@31 -- # bdevperf_pid=85082 00:22:22.834 06:52:36 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:22.834 06:52:36 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:22.834 06:52:36 -- host/failover.sh@34 -- # waitforlisten 85082 /var/tmp/bdevperf.sock 00:22:22.834 06:52:36 -- common/autotest_common.sh@829 -- # '[' -z 85082 ']' 00:22:22.834 06:52:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.834 06:52:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
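The steps traced above, from nvmf_create_transport down to launching bdevperf, are the scaffolding the failover test runs against: one Malloc-backed subsystem exported on three TCP listeners (4420-4422) and a separate bdevperf process with its own RPC socket that attaches to the first listener. Condensed into a sketch, with the full /home/vagrant/spdk_repo/spdk paths shortened to rpc.py and bdevperf for readability:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB malloc bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                     # three listeners to fail over between
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done
bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

The later nvmf_subsystem_remove_listener calls then pull listeners out from under NVMe0 one at a time; the long runs of "recv state of tqpair ... is same with the state(5) to be set" messages below are logged while those listeners are torn down and the host path fails over to a surviving port.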
00:22:22.834 06:52:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.834 06:52:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.834 06:52:36 -- common/autotest_common.sh@10 -- # set +x 00:22:24.290 06:52:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:24.291 06:52:37 -- common/autotest_common.sh@862 -- # return 0 00:22:24.291 06:52:37 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:24.291 NVMe0n1 00:22:24.291 06:52:38 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:24.563 00:22:24.563 06:52:38 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:24.563 06:52:38 -- host/failover.sh@39 -- # run_test_pid=85131 00:22:24.563 06:52:38 -- host/failover.sh@41 -- # sleep 1 00:22:25.942 06:52:39 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:25.942 [2024-12-14 06:52:39.714052] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.942 [2024-12-14 06:52:39.714137] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.942 [2024-12-14 06:52:39.714149] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.942 [2024-12-14 06:52:39.714158] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.942 [2024-12-14 06:52:39.714167] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.942 [2024-12-14 06:52:39.714175] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.942 [2024-12-14 06:52:39.714184] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.942 [2024-12-14 06:52:39.714193] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.942 [2024-12-14 06:52:39.714201] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.942 [2024-12-14 06:52:39.714209] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.942 [2024-12-14 06:52:39.714218] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714226] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714235] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714243] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714251] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714259] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714267] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714275] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714283] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714291] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714299] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714309] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714318] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714326] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714334] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714344] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714352] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714377] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714385] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714393] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714416] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714440] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714448] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714472] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714481] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the 
state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714489] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714496] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714505] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714513] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714529] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714537] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714545] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714553] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 [2024-12-14 06:52:39.714561] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151f5b0 is same with the state(5) to be set 00:22:25.943 06:52:39 -- host/failover.sh@45 -- # sleep 3 00:22:29.232 06:52:42 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:29.232 00:22:29.232 06:52:43 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:29.492 [2024-12-14 06:52:43.331607] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331685] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331711] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331720] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331729] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331736] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331744] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331752] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331759] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331767] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331774] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331782] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331789] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331796] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331804] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331811] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331818] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331826] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331833] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331840] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331848] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331856] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331864] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331872] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331881] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331889] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331898] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331905] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331937] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331945] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331970] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.331978] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.332011] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.332030] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.332039] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.332048] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.332057] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.332065] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.332074] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.332083] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 [2024-12-14 06:52:43.332092] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520420 is same with the state(5) to be set 00:22:29.492 06:52:43 -- host/failover.sh@50 -- # sleep 3 00:22:32.781 06:52:46 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:32.781 [2024-12-14 06:52:46.604216] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.781 06:52:46 -- host/failover.sh@55 -- # sleep 1 00:22:33.719 06:52:47 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:33.979 [2024-12-14 06:52:47.931560] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931600] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931610] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931619] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931627] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931635] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931643] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 
06:52:47.931651] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931659] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931667] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931674] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931681] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931688] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931696] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931703] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931710] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931717] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931724] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931732] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931739] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931747] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931754] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931761] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931768] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931776] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931783] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931791] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931798] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931807] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same 
with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931814] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931822] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931830] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931837] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931845] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.979 [2024-12-14 06:52:47.931853] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 [2024-12-14 06:52:47.931861] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 [2024-12-14 06:52:47.931868] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 [2024-12-14 06:52:47.931875] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 [2024-12-14 06:52:47.931883] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 [2024-12-14 06:52:47.931891] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 [2024-12-14 06:52:47.931898] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 [2024-12-14 06:52:47.931905] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 [2024-12-14 06:52:47.931913] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 [2024-12-14 06:52:47.931920] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 [2024-12-14 06:52:47.931928] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 [2024-12-14 06:52:47.931935] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 [2024-12-14 06:52:47.931942] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 [2024-12-14 06:52:47.931949] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 [2024-12-14 06:52:47.931973] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 [2024-12-14 06:52:47.931981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 [2024-12-14 06:52:47.931989] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 [2024-12-14 06:52:47.932012] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 [2024-12-14 06:52:47.932023] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 [2024-12-14 06:52:47.932032] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 [2024-12-14 06:52:47.932040] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 [2024-12-14 06:52:47.932048] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520fb0 is same with the state(5) to be set 00:22:33.980 06:52:47 -- host/failover.sh@59 -- # wait 85131 00:22:40.553 0 00:22:40.553 06:52:53 -- host/failover.sh@61 -- # killprocess 85082 00:22:40.553 06:52:53 -- common/autotest_common.sh@936 -- # '[' -z 85082 ']' 00:22:40.553 06:52:53 -- common/autotest_common.sh@940 -- # kill -0 85082 00:22:40.553 06:52:53 -- common/autotest_common.sh@941 -- # uname 00:22:40.553 06:52:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:40.553 06:52:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85082 00:22:40.553 06:52:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:40.553 06:52:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:40.553 06:52:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85082' 00:22:40.553 killing process with pid 85082 00:22:40.553 06:52:53 -- common/autotest_common.sh@955 -- # kill 85082 00:22:40.554 06:52:53 -- common/autotest_common.sh@960 -- # wait 85082 00:22:40.554 06:52:54 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:40.554 [2024-12-14 06:52:36.854681] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:40.554 [2024-12-14 06:52:36.854802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85082 ] 00:22:40.554 [2024-12-14 06:52:36.987337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.554 [2024-12-14 06:52:37.104820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.554 Running I/O for 15 seconds... 
00:22:40.554 [2024-12-14 06:52:39.714882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.714934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.714979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.714996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:129680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:129080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:129096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:129120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 
06:52:39.715281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:129704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:129712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:129736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:129752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:129760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:129784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:129800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:129808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:129824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:129840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:129192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:129224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:129240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.715968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.715985] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.716010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.716030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.716052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.716069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:129848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.554 [2024-12-14 06:52:39.716083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.554 [2024-12-14 06:52:39.716102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:129864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.716115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:129872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.716146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:129880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.716176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.716215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:129928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.555 [2024-12-14 06:52:39.716246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.716275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.716305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716321] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:35 nsid:1 lba:129304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.716335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:129320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.716364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.716394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:129368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.716423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.716452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.716488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.716518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:129400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.716554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.716591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.555 [2024-12-14 06:52:39.716622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 
nsid:1 lba:129968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.555 [2024-12-14 06:52:39.716653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.555 [2024-12-14 06:52:39.716682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:129984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.555 [2024-12-14 06:52:39.716712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.716742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:130000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.716771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:130008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.555 [2024-12-14 06:52:39.716801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:130016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.555 [2024-12-14 06:52:39.716831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:130024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.555 [2024-12-14 06:52:39.716860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:130032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.555 [2024-12-14 06:52:39.716890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.716920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130048 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:40.555 [2024-12-14 06:52:39.716963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.716982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.717012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.717030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:130064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.555 [2024-12-14 06:52:39.717044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.717059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.717079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.717096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:129448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.717110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.717126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:129456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.717140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.717156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.717170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.717186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.717200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.717215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:129520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.717229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.555 [2024-12-14 06:52:39.717245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.555 [2024-12-14 06:52:39.717259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 
[2024-12-14 06:52:39.717289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:130072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 [2024-12-14 06:52:39.717318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.556 [2024-12-14 06:52:39.717348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 [2024-12-14 06:52:39.717378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:130096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.556 [2024-12-14 06:52:39.717416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:130104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.556 [2024-12-14 06:52:39.717446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 [2024-12-14 06:52:39.717475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 [2024-12-14 06:52:39.717510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 [2024-12-14 06:52:39.717541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 [2024-12-14 06:52:39.717577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 [2024-12-14 06:52:39.717607] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 [2024-12-14 06:52:39.717638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:129584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 [2024-12-14 06:52:39.717669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:129592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 [2024-12-14 06:52:39.717699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 [2024-12-14 06:52:39.717729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 [2024-12-14 06:52:39.717759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 [2024-12-14 06:52:39.717797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:130136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.556 [2024-12-14 06:52:39.717828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:130144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.556 [2024-12-14 06:52:39.717857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.556 [2024-12-14 06:52:39.717887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:130160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.556 [2024-12-14 06:52:39.717916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 [2024-12-14 06:52:39.717958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.717977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 [2024-12-14 06:52:39.717991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.718007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:130184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.556 [2024-12-14 06:52:39.718027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.718043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:130192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.556 [2024-12-14 06:52:39.718057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.718082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:130200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 [2024-12-14 06:52:39.718104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.718124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 [2024-12-14 06:52:39.718138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.718154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:130216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.556 [2024-12-14 06:52:39.718169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.718184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 [2024-12-14 06:52:39.718198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.718222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:130232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.556 [2024-12-14 06:52:39.718238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.718254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:130240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.556 [2024-12-14 06:52:39.718268] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.718284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:130248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 [2024-12-14 06:52:39.718297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.718313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 [2024-12-14 06:52:39.718327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.718343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.556 [2024-12-14 06:52:39.718357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.718373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.556 [2024-12-14 06:52:39.718387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.556 [2024-12-14 06:52:39.718403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:130280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:39.718417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.718433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:130288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.557 [2024-12-14 06:52:39.718447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.718462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:39.718476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.718492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:130304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:39.718506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.718521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:130312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.557 [2024-12-14 06:52:39.718549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.718565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:39.718579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.718604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:39.718631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.718649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:129632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:39.718663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.718679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:129664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:39.718694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.718709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:129672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:39.718723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.718739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:129688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:39.718754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.718770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:129696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:39.718784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.718800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:129720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:39.718814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.718830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:129728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:39.718844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.718860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:129744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:39.718874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.718890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:39.718904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.718920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:39.718934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.718962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:129832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:39.718977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.718999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:129856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:39.719014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.719037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:39.719053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.719080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:129904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:39.719108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.719125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:129912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:39.719139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.719155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b089a0 is same with the state(5) to be set 00:22:40.557 [2024-12-14 06:52:39.719179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:40.557 [2024-12-14 06:52:39.719191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:40.557 [2024-12-14 06:52:39.719203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129920 len:8 PRP1 0x0 PRP2 0x0 00:22:40.557 [2024-12-14 06:52:39.719217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.719288] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b089a0 was disconnected and freed. reset controller. 
00:22:40.557 [2024-12-14 06:52:39.719308] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:40.557 [2024-12-14 06:52:39.719368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.557 [2024-12-14 06:52:39.719392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.719408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.557 [2024-12-14 06:52:39.719422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.719437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.557 [2024-12-14 06:52:39.719451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.719467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.557 [2024-12-14 06:52:39.719480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:39.719495] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:40.557 [2024-12-14 06:52:39.721980] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:40.557 [2024-12-14 06:52:39.722019] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a93440 (9): Bad file descriptor 00:22:40.557 [2024-12-14 06:52:39.749414] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
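
[editor note] The block above is one complete failover cycle: in-flight I/O on qid:1 is completed with ABORTED - SQ DELETION, the qpair is disconnected and freed, bdev_nvme moves the trid from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller reset finishes successfully. As a minimal sketch (not part of the SPDK test suite), a parser like the following could pull those failover transitions and reset results out of a console log shaped like this one; the input path "build.log" is a hypothetical example.

    #!/usr/bin/env python3
    # Minimal sketch, assuming the log format shown above: extract the
    # bdev_nvme failover transitions and successful controller resets.
    # The default path "build.log" is hypothetical.
    import re
    import sys

    # Message fragments taken verbatim from the bdev_nvme.c notices in this log.
    EVENT = re.compile(
        r"Start failover from (\S+) to (\S+)"
        r"|Resetting controller successful"
    )

    def failover_summary(text):
        """Return failover transitions and successful resets in log order."""
        events = []
        for m in EVENT.finditer(text):
            if m.group(1):
                events.append(("failover", m.group(1), m.group(2)))
            else:
                events.append(("reset", "successful"))
        return events

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else "build.log"
        with open(path, errors="replace") as fh:
            for event in failover_summary(fh.read()):
                print(*event)

Run against this run's output it would report the 4420 -> 4421 transition above and the 4421 -> 4422 transition that follows, each paired with a successful reset.
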
00:22:40.557 [2024-12-14 06:52:43.332194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:43.332265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:43.332322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:43.332341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:43.332357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:43.332387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:43.332417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:43.332430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:43.332444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:43.332458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:43.332487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:43.332515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.557 [2024-12-14 06:52:43.332544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.557 [2024-12-14 06:52:43.332558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.332573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.332586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.332601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.332615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.332639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.332652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.332679] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.332703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.332718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.332731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.332755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.332768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.332783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.332806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.332822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.332836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.332851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.332865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.332880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.332896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.332912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.332925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.332940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.332970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.333014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.333027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.333043] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.333083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.333103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.333118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.333142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.333157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.333173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.333187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.333203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.333217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.333234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.333248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.333264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.333288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.333305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.333321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.333337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.333351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.333367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.333396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.333411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:113 nsid:1 lba:18544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.333425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.333465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.333525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.333540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.333560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.333575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.333600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.558 [2024-12-14 06:52:43.333614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.558 [2024-12-14 06:52:43.333627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.333642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.559 [2024-12-14 06:52:43.333655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.333668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.333682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.333696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.333709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.333733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.333755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.333778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.333792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.333806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19120 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.333819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.333833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.333846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.333861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.559 [2024-12-14 06:52:43.333873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.333887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.333916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.333939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.333968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.559 [2024-12-14 06:52:43.334019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.559 [2024-12-14 06:52:43.334049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.334120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.559 [2024-12-14 06:52:43.334159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.559 [2024-12-14 06:52:43.334190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 
[2024-12-14 06:52:43.334220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.334261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.559 [2024-12-14 06:52:43.334293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.334323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.559 [2024-12-14 06:52:43.334368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.559 [2024-12-14 06:52:43.334428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.334456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.334484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.559 [2024-12-14 06:52:43.334527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.334583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.559 [2024-12-14 06:52:43.334611] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.334638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.334665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.334693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.334734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.334770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.334807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.334835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.334863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.334891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.334919] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.559 [2024-12-14 06:52:43.334934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.559 [2024-12-14 06:52:43.334946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.334977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.560 [2024-12-14 06:52:43.334990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.560 [2024-12-14 06:52:43.335033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.560 [2024-12-14 06:52:43.335120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.560 [2024-12-14 06:52:43.335148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.560 [2024-12-14 06:52:43.335178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.560 [2024-12-14 06:52:43.335215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.560 [2024-12-14 06:52:43.335243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.560 [2024-12-14 06:52:43.335272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.560 [2024-12-14 06:52:43.335308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.560 [2024-12-14 06:52:43.335337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.560 [2024-12-14 06:52:43.335365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.560 [2024-12-14 06:52:43.335410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.560 [2024-12-14 06:52:43.335442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.560 [2024-12-14 06:52:43.335511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.560 [2024-12-14 06:52:43.335554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.560 [2024-12-14 06:52:43.335583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.560 [2024-12-14 06:52:43.335611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.560 [2024-12-14 06:52:43.335647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.560 [2024-12-14 06:52:43.335682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:40.560 [2024-12-14 06:52:43.335697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.560 [2024-12-14 06:52:43.335710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.560 [2024-12-14 06:52:43.335754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.560 [2024-12-14 06:52:43.335783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.560 [2024-12-14 06:52:43.335813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.560 [2024-12-14 06:52:43.335864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.560 [2024-12-14 06:52:43.335905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.560 [2024-12-14 06:52:43.335934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.560 [2024-12-14 06:52:43.335974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.335990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.560 [2024-12-14 06:52:43.336004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.336019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.560 [2024-12-14 06:52:43.336033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.336058] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.560 [2024-12-14 06:52:43.336081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.336105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.560 [2024-12-14 06:52:43.336120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.336135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.560 [2024-12-14 06:52:43.336161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.336176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.560 [2024-12-14 06:52:43.336190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.336205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.560 [2024-12-14 06:52:43.336219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.560 [2024-12-14 06:52:43.336240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.560 [2024-12-14 06:52:43.336254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.336270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.561 [2024-12-14 06:52:43.336299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.336329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.561 [2024-12-14 06:52:43.336343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.336359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.561 [2024-12-14 06:52:43.336373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.336400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.561 [2024-12-14 06:52:43.336414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.336439] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.561 [2024-12-14 06:52:43.336452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.336468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.561 [2024-12-14 06:52:43.336487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.336514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.561 [2024-12-14 06:52:43.336528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.336551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.561 [2024-12-14 06:52:43.336572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.336601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.561 [2024-12-14 06:52:43.336615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.336630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.561 [2024-12-14 06:52:43.336644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.336658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.561 [2024-12-14 06:52:43.336672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.336687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.561 [2024-12-14 06:52:43.336700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.336715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.561 [2024-12-14 06:52:43.336728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.336743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.561 [2024-12-14 06:52:43.336757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.336772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 
lba:18984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.561 [2024-12-14 06:52:43.336785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.336814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.561 [2024-12-14 06:52:43.336842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.336857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.561 [2024-12-14 06:52:43.336872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.336886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.561 [2024-12-14 06:52:43.336899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.336914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.561 [2024-12-14 06:52:43.336927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.336942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.561 [2024-12-14 06:52:43.336966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.337003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8bae0 is same with the state(5) to be set 00:22:40.561 [2024-12-14 06:52:43.337030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:40.561 [2024-12-14 06:52:43.337045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:40.561 [2024-12-14 06:52:43.337063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19072 len:8 PRP1 0x0 PRP2 0x0 00:22:40.561 [2024-12-14 06:52:43.337078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.337148] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a8bae0 was disconnected and freed. reset controller. 
00:22:40.561 [2024-12-14 06:52:43.337167] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:40.561 [2024-12-14 06:52:43.337227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.561 [2024-12-14 06:52:43.337250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.337266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.561 [2024-12-14 06:52:43.337288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.337303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.561 [2024-12-14 06:52:43.337316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.337330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.561 [2024-12-14 06:52:43.337343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:43.337364] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:40.561 [2024-12-14 06:52:43.337429] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a93440 (9): Bad file descriptor 00:22:40.561 [2024-12-14 06:52:43.340299] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:40.561 [2024-12-14 06:52:43.369272] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
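
[editor note] Each aborted request in these cycles is dumped as an nvme_io_qpair_print_command notice followed by an ABORTED - SQ DELETION completion, so the same log can also be used to gauge roughly how much READ versus WRITE I/O was in flight when each submission queue was torn down. A rough companion sketch under the same hypothetical file name; it simply counts the printed I/O commands per sqid and opcode, which in this run correspond to the aborted requests.

    #!/usr/bin/env python3
    # Rough sketch, same assumptions as the previous example: tally the I/O
    # commands dumped by nvme_io_qpair_print_command, grouped by sqid and
    # opcode (READ/WRITE). The path "build.log" is hypothetical.
    import re
    import sys
    from collections import Counter

    CMD = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+)")

    def printed_io_by_opcode(text):
        """Count printed I/O commands per (sqid, opcode)."""
        counts = Counter()
        for opcode, sqid in CMD.findall(text):
            counts[(int(sqid), opcode)] += 1
        return counts

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else "build.log"
        with open(path, errors="replace") as fh:
            for (sqid, opcode), n in sorted(printed_io_by_opcode(fh.read()).items()):
                print(f"sqid {sqid}: {n} {opcode} commands printed")
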
00:22:40.561 [2024-12-14 06:52:47.931185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.561 [2024-12-14 06:52:47.931264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:47.931286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.561 [2024-12-14 06:52:47.931315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:47.931329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.561 [2024-12-14 06:52:47.931342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:47.931370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.561 [2024-12-14 06:52:47.931397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:47.931410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a93440 is same with the state(5) to be set 00:22:40.561 [2024-12-14 06:52:47.932157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.561 [2024-12-14 06:52:47.932187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:47.932211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.561 [2024-12-14 06:52:47.932227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:47.932242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.561 [2024-12-14 06:52:47.932256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:47.932271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.561 [2024-12-14 06:52:47.932284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.561 [2024-12-14 06:52:47.932314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.932343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.932369] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:29912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.932396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:29928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.932422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.932448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:29952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.932474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.932501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.932527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:29984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.932571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.932600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.932625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.932651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.932681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.932709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.562 [2024-12-14 06:52:47.932736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.932762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.932789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:30592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.562 [2024-12-14 06:52:47.932816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.932842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.932869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.932895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.932931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.932945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.932974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.933003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.933020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.933035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.933049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.933063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.933076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.933091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.933104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.933118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.933131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.933145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:30616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.562 [2024-12-14 06:52:47.933159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.933175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.933188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.933202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.933215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.933230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.933243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 
[2024-12-14 06:52:47.933258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:30648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.933271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.933285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.562 [2024-12-14 06:52:47.933320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.933336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:30664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.562 [2024-12-14 06:52:47.933349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.933364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.562 [2024-12-14 06:52:47.933376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.933390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:30680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.562 [2024-12-14 06:52:47.933403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.562 [2024-12-14 06:52:47.933418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.933431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.933445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.933458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.933472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.933484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.933499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.933511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.933525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.933537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.933551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.933564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.933578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.933590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.933604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.933617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.933632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.933644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.933665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.933679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.933694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.933707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.933721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.563 [2024-12-14 06:52:47.933734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.933747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.933760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.933775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:30728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.563 [2024-12-14 06:52:47.933787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.933801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:30736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.563 [2024-12-14 06:52:47.933813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.933827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:95 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.933840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.933854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.563 [2024-12-14 06:52:47.933867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.933880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.933894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.933908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.933922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.933936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.933949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.933975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:30256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.933990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.934004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.934017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.934039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.934052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.934067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.934126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.934141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.934155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.934169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30352 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.563 [2024-12-14 06:52:47.934182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.934197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:30768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.563 [2024-12-14 06:52:47.934210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.934225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:30776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.563 [2024-12-14 06:52:47.934238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.563 [2024-12-14 06:52:47.934253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.563 [2024-12-14 06:52:47.934266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:30792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.564 [2024-12-14 06:52:47.934294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.934321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.564 [2024-12-14 06:52:47.934349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.934377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.934405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.934459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:40.564 [2024-12-14 06:52:47.934494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.934533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:30856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.934570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.934596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.564 [2024-12-14 06:52:47.934624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.934650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.934677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.934704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.934731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:30424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.934758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:30432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.934784] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.934811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.934845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:30880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.564 [2024-12-14 06:52:47.934873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.934900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.564 [2024-12-14 06:52:47.934927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.934941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.564 [2024-12-14 06:52:47.934976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.935000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:30912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.564 [2024-12-14 06:52:47.935017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.935048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.935061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.935076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.935090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.935105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.564 [2024-12-14 06:52:47.935118] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.935133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.564 [2024-12-14 06:52:47.935146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.935161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.935175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.935189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.935203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.935217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.935254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.935289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.935303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.935319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.935333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.935349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:30992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.564 [2024-12-14 06:52:47.935378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.935394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.564 [2024-12-14 06:52:47.935407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.935422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:31008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.564 [2024-12-14 06:52:47.935436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.564 [2024-12-14 06:52:47.935452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.564 [2024-12-14 06:52:47.935466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.565 [2024-12-14 06:52:47.935496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.565 [2024-12-14 06:52:47.935516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.565 [2024-12-14 06:52:47.935532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:31032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.565 [2024-12-14 06:52:47.935568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.565 [2024-12-14 06:52:47.935584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.565 [2024-12-14 06:52:47.935598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.565 [2024-12-14 06:52:47.935613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.565 [2024-12-14 06:52:47.935626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.565 [2024-12-14 06:52:47.935641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.565 [2024-12-14 06:52:47.935672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.565 [2024-12-14 06:52:47.935688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.565 [2024-12-14 06:52:47.935702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.565 [2024-12-14 06:52:47.935717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.565 [2024-12-14 06:52:47.935738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.565 [2024-12-14 06:52:47.935755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:31080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.565 [2024-12-14 06:52:47.935770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.565 [2024-12-14 06:52:47.935785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:31088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.565 [2024-12-14 06:52:47.935799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.565 [2024-12-14 06:52:47.935815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:31096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.565 [2024-12-14 06:52:47.935829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:40.565 [2024-12-14 06:52:47.935845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:31104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.565 [2024-12-14 06:52:47.935859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.565 [2024-12-14 06:52:47.935875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.565 [2024-12-14 06:52:47.935889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.565 [2024-12-14 06:52:47.935905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.565 [2024-12-14 06:52:47.935919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.565 [2024-12-14 06:52:47.935935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.565 [2024-12-14 06:52:47.935949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.565 [2024-12-14 06:52:47.935965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.565 [2024-12-14 06:52:47.935979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.565 [2024-12-14 06:52:47.935995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.565 [2024-12-14 06:52:47.936009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.565 [2024-12-14 06:52:47.936025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:30496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.565 [2024-12-14 06:52:47.936059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.565 [2024-12-14 06:52:47.936107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.565 [2024-12-14 06:52:47.936127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.565 [2024-12-14 06:52:47.936158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.565 [2024-12-14 06:52:47.936173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.565 [2024-12-14 06:52:47.936196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.565 [2024-12-14 06:52:47.936211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.565 [2024-12-14 06:52:47.936225] 
nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b061f0 is same with the state(5) to be set 00:22:40.565 [2024-12-14 06:52:47.936243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:40.565 [2024-12-14 06:52:47.936271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:40.565 [2024-12-14 06:52:47.936282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30560 len:8 PRP1 0x0 PRP2 0x0 00:22:40.565 [2024-12-14 06:52:47.936296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.565 [2024-12-14 06:52:47.936395] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b061f0 was disconnected and freed. reset controller. 00:22:40.565 [2024-12-14 06:52:47.936414] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:40.565 [2024-12-14 06:52:47.936430] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:40.565 [2024-12-14 06:52:47.938864] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:40.565 [2024-12-14 06:52:47.938907] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a93440 (9): Bad file descriptor 00:22:40.565 [2024-12-14 06:52:47.968906] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:40.565 00:22:40.565 Latency(us) 00:22:40.565 [2024-12-14T06:52:54.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.565 [2024-12-14T06:52:54.557Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:40.565 Verification LBA range: start 0x0 length 0x4000 00:22:40.565 NVMe0n1 : 15.01 13293.58 51.93 296.19 0.00 9402.04 588.33 15132.86 00:22:40.565 [2024-12-14T06:52:54.557Z] =================================================================================================================== 00:22:40.565 [2024-12-14T06:52:54.557Z] Total : 13293.58 51.93 296.19 0.00 9402.04 588.33 15132.86 00:22:40.565 Received shutdown signal, test time was about 15.000000 seconds 00:22:40.565 00:22:40.565 Latency(us) 00:22:40.565 [2024-12-14T06:52:54.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.565 [2024-12-14T06:52:54.557Z] =================================================================================================================== 00:22:40.565 [2024-12-14T06:52:54.557Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:40.565 06:52:54 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:40.565 06:52:54 -- host/failover.sh@65 -- # count=3 00:22:40.565 06:52:54 -- host/failover.sh@67 -- # (( count != 3 )) 00:22:40.565 06:52:54 -- host/failover.sh@73 -- # bdevperf_pid=85333 00:22:40.565 06:52:54 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:40.565 06:52:54 -- host/failover.sh@75 -- # waitforlisten 85333 /var/tmp/bdevperf.sock 00:22:40.565 06:52:54 -- common/autotest_common.sh@829 -- # '[' -z 85333 ']' 00:22:40.565 06:52:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:40.565 06:52:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:40.565 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:40.565 06:52:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:40.565 06:52:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:40.565 06:52:54 -- common/autotest_common.sh@10 -- # set +x 00:22:41.134 06:52:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:41.134 06:52:55 -- common/autotest_common.sh@862 -- # return 0 00:22:41.134 06:52:55 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:41.393 [2024-12-14 06:52:55.339694] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:41.393 06:52:55 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:41.652 [2024-12-14 06:52:55.576068] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:41.652 06:52:55 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:41.911 NVMe0n1 00:22:42.171 06:52:55 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:42.429 00:22:42.429 06:52:56 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:42.688 00:22:42.688 06:52:56 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:42.688 06:52:56 -- host/failover.sh@82 -- # grep -q NVMe0 00:22:42.947 06:52:56 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:43.207 06:52:57 -- host/failover.sh@87 -- # sleep 3 00:22:46.494 06:53:00 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:46.494 06:53:00 -- host/failover.sh@88 -- # grep -q NVMe0 00:22:46.494 06:53:00 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:46.494 06:53:00 -- host/failover.sh@90 -- # run_test_pid=85471 00:22:46.494 06:53:00 -- host/failover.sh@92 -- # wait 85471 00:22:47.868 0 00:22:47.868 06:53:01 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:47.868 [2024-12-14 06:52:54.072366] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:47.868 [2024-12-14 06:52:54.072534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85333 ] 00:22:47.868 [2024-12-14 06:52:54.212790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.868 [2024-12-14 06:52:54.341572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.868 [2024-12-14 06:52:57.008555] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:47.868 [2024-12-14 06:52:57.008731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.868 [2024-12-14 06:52:57.008757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.868 [2024-12-14 06:52:57.008778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.868 [2024-12-14 06:52:57.008792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.868 [2024-12-14 06:52:57.008805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.868 [2024-12-14 06:52:57.008819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.868 [2024-12-14 06:52:57.008833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.868 [2024-12-14 06:52:57.008857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.868 [2024-12-14 06:52:57.008871] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:47.868 [2024-12-14 06:52:57.008934] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:47.868 [2024-12-14 06:52:57.009019] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c4440 (9): Bad file descriptor 00:22:47.868 [2024-12-14 06:52:57.020283] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:47.868 Running I/O for 1 seconds... 
00:22:47.868 00:22:47.868 Latency(us) 00:22:47.868 [2024-12-14T06:53:01.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.868 [2024-12-14T06:53:01.860Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:47.868 Verification LBA range: start 0x0 length 0x4000 00:22:47.868 NVMe0n1 : 1.01 11688.18 45.66 0.00 0.00 10899.14 1005.38 14596.65 00:22:47.868 [2024-12-14T06:53:01.860Z] =================================================================================================================== 00:22:47.868 [2024-12-14T06:53:01.860Z] Total : 11688.18 45.66 0.00 0.00 10899.14 1005.38 14596.65 00:22:47.868 06:53:01 -- host/failover.sh@95 -- # grep -q NVMe0 00:22:47.868 06:53:01 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:47.868 06:53:01 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:48.127 06:53:02 -- host/failover.sh@99 -- # grep -q NVMe0 00:22:48.127 06:53:02 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:48.388 06:53:02 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:48.653 06:53:02 -- host/failover.sh@101 -- # sleep 3 00:22:51.949 06:53:05 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:51.949 06:53:05 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:51.949 06:53:05 -- host/failover.sh@108 -- # killprocess 85333 00:22:51.949 06:53:05 -- common/autotest_common.sh@936 -- # '[' -z 85333 ']' 00:22:51.949 06:53:05 -- common/autotest_common.sh@940 -- # kill -0 85333 00:22:51.949 06:53:05 -- common/autotest_common.sh@941 -- # uname 00:22:51.949 06:53:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:51.949 06:53:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85333 00:22:51.949 06:53:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:51.949 06:53:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:51.949 06:53:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85333' 00:22:51.949 killing process with pid 85333 00:22:51.949 06:53:05 -- common/autotest_common.sh@955 -- # kill 85333 00:22:51.949 06:53:05 -- common/autotest_common.sh@960 -- # wait 85333 00:22:52.208 06:53:06 -- host/failover.sh@110 -- # sync 00:22:52.467 06:53:06 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:52.726 06:53:06 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:52.726 06:53:06 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:52.726 06:53:06 -- host/failover.sh@116 -- # nvmftestfini 00:22:52.726 06:53:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:52.726 06:53:06 -- nvmf/common.sh@116 -- # sync 00:22:52.726 06:53:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:52.726 06:53:06 -- nvmf/common.sh@119 -- # set +e 00:22:52.726 06:53:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:52.726 06:53:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:52.726 rmmod 
nvme_tcp 00:22:52.726 rmmod nvme_fabrics 00:22:52.726 rmmod nvme_keyring 00:22:52.726 06:53:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:52.726 06:53:06 -- nvmf/common.sh@123 -- # set -e 00:22:52.726 06:53:06 -- nvmf/common.sh@124 -- # return 0 00:22:52.726 06:53:06 -- nvmf/common.sh@477 -- # '[' -n 84966 ']' 00:22:52.726 06:53:06 -- nvmf/common.sh@478 -- # killprocess 84966 00:22:52.726 06:53:06 -- common/autotest_common.sh@936 -- # '[' -z 84966 ']' 00:22:52.726 06:53:06 -- common/autotest_common.sh@940 -- # kill -0 84966 00:22:52.726 06:53:06 -- common/autotest_common.sh@941 -- # uname 00:22:52.726 06:53:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:52.726 06:53:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84966 00:22:52.726 06:53:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:52.726 06:53:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:52.726 killing process with pid 84966 00:22:52.726 06:53:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84966' 00:22:52.726 06:53:06 -- common/autotest_common.sh@955 -- # kill 84966 00:22:52.726 06:53:06 -- common/autotest_common.sh@960 -- # wait 84966 00:22:53.294 06:53:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:53.294 06:53:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:53.295 06:53:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:53.295 06:53:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:53.295 06:53:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:53.295 06:53:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.295 06:53:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.295 06:53:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.295 06:53:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:53.295 ************************************ 00:22:53.295 END TEST nvmf_failover 00:22:53.295 ************************************ 00:22:53.295 00:22:53.295 real 0m33.703s 00:22:53.295 user 2m10.257s 00:22:53.295 sys 0m5.163s 00:22:53.295 06:53:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:53.295 06:53:07 -- common/autotest_common.sh@10 -- # set +x 00:22:53.295 06:53:07 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:53.295 06:53:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:53.295 06:53:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:53.295 06:53:07 -- common/autotest_common.sh@10 -- # set +x 00:22:53.295 ************************************ 00:22:53.295 START TEST nvmf_discovery 00:22:53.295 ************************************ 00:22:53.295 06:53:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:53.295 * Looking for test storage... 
00:22:53.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:53.295 06:53:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:53.295 06:53:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:53.295 06:53:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:53.295 06:53:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:53.295 06:53:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:53.295 06:53:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:53.295 06:53:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:53.295 06:53:07 -- scripts/common.sh@335 -- # IFS=.-: 00:22:53.295 06:53:07 -- scripts/common.sh@335 -- # read -ra ver1 00:22:53.295 06:53:07 -- scripts/common.sh@336 -- # IFS=.-: 00:22:53.295 06:53:07 -- scripts/common.sh@336 -- # read -ra ver2 00:22:53.295 06:53:07 -- scripts/common.sh@337 -- # local 'op=<' 00:22:53.295 06:53:07 -- scripts/common.sh@339 -- # ver1_l=2 00:22:53.295 06:53:07 -- scripts/common.sh@340 -- # ver2_l=1 00:22:53.295 06:53:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:53.295 06:53:07 -- scripts/common.sh@343 -- # case "$op" in 00:22:53.295 06:53:07 -- scripts/common.sh@344 -- # : 1 00:22:53.295 06:53:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:53.295 06:53:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:53.295 06:53:07 -- scripts/common.sh@364 -- # decimal 1 00:22:53.295 06:53:07 -- scripts/common.sh@352 -- # local d=1 00:22:53.295 06:53:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:53.295 06:53:07 -- scripts/common.sh@354 -- # echo 1 00:22:53.295 06:53:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:53.295 06:53:07 -- scripts/common.sh@365 -- # decimal 2 00:22:53.295 06:53:07 -- scripts/common.sh@352 -- # local d=2 00:22:53.295 06:53:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:53.295 06:53:07 -- scripts/common.sh@354 -- # echo 2 00:22:53.295 06:53:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:53.295 06:53:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:53.295 06:53:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:53.295 06:53:07 -- scripts/common.sh@367 -- # return 0 00:22:53.295 06:53:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:53.295 06:53:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:53.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.295 --rc genhtml_branch_coverage=1 00:22:53.295 --rc genhtml_function_coverage=1 00:22:53.295 --rc genhtml_legend=1 00:22:53.295 --rc geninfo_all_blocks=1 00:22:53.295 --rc geninfo_unexecuted_blocks=1 00:22:53.295 00:22:53.295 ' 00:22:53.295 06:53:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:53.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.295 --rc genhtml_branch_coverage=1 00:22:53.295 --rc genhtml_function_coverage=1 00:22:53.295 --rc genhtml_legend=1 00:22:53.295 --rc geninfo_all_blocks=1 00:22:53.295 --rc geninfo_unexecuted_blocks=1 00:22:53.295 00:22:53.295 ' 00:22:53.295 06:53:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:53.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.295 --rc genhtml_branch_coverage=1 00:22:53.295 --rc genhtml_function_coverage=1 00:22:53.295 --rc genhtml_legend=1 00:22:53.295 --rc geninfo_all_blocks=1 00:22:53.295 --rc geninfo_unexecuted_blocks=1 00:22:53.295 00:22:53.295 ' 00:22:53.295 
06:53:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:53.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.295 --rc genhtml_branch_coverage=1 00:22:53.295 --rc genhtml_function_coverage=1 00:22:53.295 --rc genhtml_legend=1 00:22:53.295 --rc geninfo_all_blocks=1 00:22:53.295 --rc geninfo_unexecuted_blocks=1 00:22:53.295 00:22:53.295 ' 00:22:53.295 06:53:07 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:53.295 06:53:07 -- nvmf/common.sh@7 -- # uname -s 00:22:53.295 06:53:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.295 06:53:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.295 06:53:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.295 06:53:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.295 06:53:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.295 06:53:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.295 06:53:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.295 06:53:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.295 06:53:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.295 06:53:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.295 06:53:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:22:53.295 06:53:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:22:53.295 06:53:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.295 06:53:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.295 06:53:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:53.295 06:53:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:53.295 06:53:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.295 06:53:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.295 06:53:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.295 06:53:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.295 06:53:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.295 06:53:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.295 06:53:07 -- paths/export.sh@5 -- # export PATH 00:22:53.295 06:53:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.295 06:53:07 -- nvmf/common.sh@46 -- # : 0 00:22:53.295 06:53:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:53.295 06:53:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:53.295 06:53:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:53.295 06:53:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.295 06:53:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.295 06:53:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:53.295 06:53:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:53.296 06:53:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:53.296 06:53:07 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:53.296 06:53:07 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:53.296 06:53:07 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:53.296 06:53:07 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:53.296 06:53:07 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:53.296 06:53:07 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:53.296 06:53:07 -- host/discovery.sh@25 -- # nvmftestinit 00:22:53.296 06:53:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:53.296 06:53:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.296 06:53:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:53.296 06:53:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:53.296 06:53:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:53.296 06:53:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.296 06:53:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.296 06:53:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.296 06:53:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:53.296 06:53:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:53.296 06:53:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:53.296 06:53:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:53.296 06:53:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:53.296 06:53:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:53.296 06:53:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:53.296 06:53:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:53.296 06:53:07 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:53.296 06:53:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:53.296 06:53:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:53.296 06:53:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:53.296 06:53:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:53.296 06:53:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:53.296 06:53:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:53.296 06:53:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:53.296 06:53:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:53.296 06:53:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:53.296 06:53:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:53.296 06:53:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:53.555 Cannot find device "nvmf_tgt_br" 00:22:53.555 06:53:07 -- nvmf/common.sh@154 -- # true 00:22:53.555 06:53:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:53.555 Cannot find device "nvmf_tgt_br2" 00:22:53.555 06:53:07 -- nvmf/common.sh@155 -- # true 00:22:53.555 06:53:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:53.555 06:53:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:53.555 Cannot find device "nvmf_tgt_br" 00:22:53.555 06:53:07 -- nvmf/common.sh@157 -- # true 00:22:53.555 06:53:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:53.555 Cannot find device "nvmf_tgt_br2" 00:22:53.555 06:53:07 -- nvmf/common.sh@158 -- # true 00:22:53.555 06:53:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:53.555 06:53:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:53.555 06:53:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:53.555 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:53.555 06:53:07 -- nvmf/common.sh@161 -- # true 00:22:53.555 06:53:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:53.555 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:53.555 06:53:07 -- nvmf/common.sh@162 -- # true 00:22:53.555 06:53:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:53.555 06:53:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:53.555 06:53:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:53.555 06:53:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:53.555 06:53:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:53.555 06:53:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:53.555 06:53:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:53.555 06:53:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:53.555 06:53:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:53.555 06:53:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:53.555 06:53:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:53.555 06:53:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:53.555 06:53:07 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:53.555 06:53:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:53.555 06:53:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:53.555 06:53:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:53.555 06:53:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:53.555 06:53:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:53.555 06:53:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:53.555 06:53:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:53.555 06:53:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:53.814 06:53:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:53.814 06:53:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:53.814 06:53:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:53.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:53.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:22:53.814 00:22:53.814 --- 10.0.0.2 ping statistics --- 00:22:53.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.814 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:53.814 06:53:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:53.814 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:53.814 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:22:53.814 00:22:53.814 --- 10.0.0.3 ping statistics --- 00:22:53.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.814 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:22:53.814 06:53:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:53.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:53.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:22:53.814 00:22:53.814 --- 10.0.0.1 ping statistics --- 00:22:53.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.814 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:22:53.814 06:53:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:53.814 06:53:07 -- nvmf/common.sh@421 -- # return 0 00:22:53.814 06:53:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:53.814 06:53:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:53.814 06:53:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:53.814 06:53:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:53.814 06:53:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:53.814 06:53:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:53.814 06:53:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:53.814 06:53:07 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:53.814 06:53:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:53.814 06:53:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:53.814 06:53:07 -- common/autotest_common.sh@10 -- # set +x 00:22:53.814 06:53:07 -- nvmf/common.sh@469 -- # nvmfpid=85784 00:22:53.814 06:53:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:53.814 06:53:07 -- nvmf/common.sh@470 -- # waitforlisten 85784 00:22:53.814 06:53:07 -- common/autotest_common.sh@829 -- # '[' -z 85784 ']' 00:22:53.814 06:53:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.814 06:53:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:53.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.814 06:53:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.814 06:53:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:53.814 06:53:07 -- common/autotest_common.sh@10 -- # set +x 00:22:53.814 [2024-12-14 06:53:07.669472] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:53.814 [2024-12-14 06:53:07.669583] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.073 [2024-12-14 06:53:07.810948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.073 [2024-12-14 06:53:07.905688] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:54.073 [2024-12-14 06:53:07.905857] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.073 [2024-12-14 06:53:07.905870] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.073 [2024-12-14 06:53:07.905877] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:54.073 [2024-12-14 06:53:07.905912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.640 06:53:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:54.640 06:53:08 -- common/autotest_common.sh@862 -- # return 0 00:22:54.640 06:53:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:54.640 06:53:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:54.640 06:53:08 -- common/autotest_common.sh@10 -- # set +x 00:22:54.640 06:53:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.640 06:53:08 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:54.640 06:53:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.640 06:53:08 -- common/autotest_common.sh@10 -- # set +x 00:22:54.640 [2024-12-14 06:53:08.581528] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.640 06:53:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.640 06:53:08 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:54.640 06:53:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.640 06:53:08 -- common/autotest_common.sh@10 -- # set +x 00:22:54.640 [2024-12-14 06:53:08.589655] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:54.640 06:53:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.640 06:53:08 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:54.640 06:53:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.640 06:53:08 -- common/autotest_common.sh@10 -- # set +x 00:22:54.640 null0 00:22:54.640 06:53:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.640 06:53:08 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:54.640 06:53:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.640 06:53:08 -- common/autotest_common.sh@10 -- # set +x 00:22:54.640 null1 00:22:54.640 06:53:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.640 06:53:08 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:54.640 06:53:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.640 06:53:08 -- common/autotest_common.sh@10 -- # set +x 00:22:54.640 06:53:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.640 06:53:08 -- host/discovery.sh@45 -- # hostpid=85830 00:22:54.640 06:53:08 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:54.640 06:53:08 -- host/discovery.sh@46 -- # waitforlisten 85830 /tmp/host.sock 00:22:54.640 06:53:08 -- common/autotest_common.sh@829 -- # '[' -z 85830 ']' 00:22:54.640 06:53:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:54.640 06:53:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:54.640 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:54.640 06:53:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:54.640 06:53:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:54.640 06:53:08 -- common/autotest_common.sh@10 -- # set +x 00:22:54.899 [2024-12-14 06:53:08.685683] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:54.899 [2024-12-14 06:53:08.685764] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85830 ] 00:22:54.899 [2024-12-14 06:53:08.826893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.158 [2024-12-14 06:53:08.939076] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:55.158 [2024-12-14 06:53:08.939239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.727 06:53:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:55.727 06:53:09 -- common/autotest_common.sh@862 -- # return 0 00:22:55.727 06:53:09 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:55.727 06:53:09 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:55.727 06:53:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.727 06:53:09 -- common/autotest_common.sh@10 -- # set +x 00:22:55.727 06:53:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.727 06:53:09 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:55.727 06:53:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.727 06:53:09 -- common/autotest_common.sh@10 -- # set +x 00:22:55.727 06:53:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.727 06:53:09 -- host/discovery.sh@72 -- # notify_id=0 00:22:55.727 06:53:09 -- host/discovery.sh@78 -- # get_subsystem_names 00:22:55.727 06:53:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:55.727 06:53:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.727 06:53:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:55.727 06:53:09 -- common/autotest_common.sh@10 -- # set +x 00:22:55.727 06:53:09 -- host/discovery.sh@59 -- # sort 00:22:55.727 06:53:09 -- host/discovery.sh@59 -- # xargs 00:22:55.727 06:53:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.986 06:53:09 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:22:55.986 06:53:09 -- host/discovery.sh@79 -- # get_bdev_list 00:22:55.986 06:53:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.986 06:53:09 -- host/discovery.sh@55 -- # sort 00:22:55.986 06:53:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:55.986 06:53:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.986 06:53:09 -- common/autotest_common.sh@10 -- # set +x 00:22:55.986 06:53:09 -- host/discovery.sh@55 -- # xargs 00:22:55.986 06:53:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.986 06:53:09 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:22:55.986 06:53:09 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:55.986 06:53:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.986 06:53:09 -- common/autotest_common.sh@10 -- # set +x 00:22:55.986 06:53:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.986 06:53:09 -- host/discovery.sh@82 -- # get_subsystem_names 00:22:55.986 06:53:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:55.986 06:53:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:55.986 06:53:09 -- host/discovery.sh@59 
-- # sort 00:22:55.986 06:53:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.986 06:53:09 -- common/autotest_common.sh@10 -- # set +x 00:22:55.986 06:53:09 -- host/discovery.sh@59 -- # xargs 00:22:55.986 06:53:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.986 06:53:09 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:22:55.986 06:53:09 -- host/discovery.sh@83 -- # get_bdev_list 00:22:55.986 06:53:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.986 06:53:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:55.986 06:53:09 -- host/discovery.sh@55 -- # sort 00:22:55.986 06:53:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.986 06:53:09 -- common/autotest_common.sh@10 -- # set +x 00:22:55.986 06:53:09 -- host/discovery.sh@55 -- # xargs 00:22:55.986 06:53:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.986 06:53:09 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:55.986 06:53:09 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:55.986 06:53:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.986 06:53:09 -- common/autotest_common.sh@10 -- # set +x 00:22:55.986 06:53:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.986 06:53:09 -- host/discovery.sh@86 -- # get_subsystem_names 00:22:55.986 06:53:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:55.987 06:53:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:55.987 06:53:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.987 06:53:09 -- host/discovery.sh@59 -- # sort 00:22:55.987 06:53:09 -- common/autotest_common.sh@10 -- # set +x 00:22:55.987 06:53:09 -- host/discovery.sh@59 -- # xargs 00:22:55.987 06:53:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.246 06:53:09 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:22:56.246 06:53:09 -- host/discovery.sh@87 -- # get_bdev_list 00:22:56.246 06:53:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.246 06:53:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.246 06:53:09 -- common/autotest_common.sh@10 -- # set +x 00:22:56.246 06:53:09 -- host/discovery.sh@55 -- # sort 00:22:56.246 06:53:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:56.246 06:53:09 -- host/discovery.sh@55 -- # xargs 00:22:56.246 06:53:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.246 06:53:10 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:56.246 06:53:10 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:56.246 06:53:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.246 06:53:10 -- common/autotest_common.sh@10 -- # set +x 00:22:56.246 [2024-12-14 06:53:10.038002] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.246 06:53:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.246 06:53:10 -- host/discovery.sh@92 -- # get_subsystem_names 00:22:56.246 06:53:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:56.246 06:53:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:56.246 06:53:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.246 06:53:10 -- host/discovery.sh@59 -- # sort 00:22:56.246 06:53:10 -- common/autotest_common.sh@10 -- # set +x 00:22:56.246 06:53:10 -- host/discovery.sh@59 -- # xargs 00:22:56.246 06:53:10 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.246 06:53:10 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:56.246 06:53:10 -- host/discovery.sh@93 -- # get_bdev_list 00:22:56.246 06:53:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.246 06:53:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.246 06:53:10 -- host/discovery.sh@55 -- # xargs 00:22:56.246 06:53:10 -- host/discovery.sh@55 -- # sort 00:22:56.246 06:53:10 -- common/autotest_common.sh@10 -- # set +x 00:22:56.246 06:53:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:56.246 06:53:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.246 06:53:10 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:22:56.246 06:53:10 -- host/discovery.sh@94 -- # get_notification_count 00:22:56.246 06:53:10 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:56.246 06:53:10 -- host/discovery.sh@74 -- # jq '. | length' 00:22:56.246 06:53:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.246 06:53:10 -- common/autotest_common.sh@10 -- # set +x 00:22:56.246 06:53:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.246 06:53:10 -- host/discovery.sh@74 -- # notification_count=0 00:22:56.246 06:53:10 -- host/discovery.sh@75 -- # notify_id=0 00:22:56.246 06:53:10 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:22:56.246 06:53:10 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:56.246 06:53:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.246 06:53:10 -- common/autotest_common.sh@10 -- # set +x 00:22:56.246 06:53:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.246 06:53:10 -- host/discovery.sh@100 -- # sleep 1 00:22:56.815 [2024-12-14 06:53:10.684614] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:56.815 [2024-12-14 06:53:10.684659] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:56.815 [2024-12-14 06:53:10.684676] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:56.815 [2024-12-14 06:53:10.770698] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:57.074 [2024-12-14 06:53:10.826619] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:57.074 [2024-12-14 06:53:10.826864] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:57.333 06:53:11 -- host/discovery.sh@101 -- # get_subsystem_names 00:22:57.333 06:53:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:57.333 06:53:11 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:57.333 06:53:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.333 06:53:11 -- host/discovery.sh@59 -- # sort 00:22:57.333 06:53:11 -- common/autotest_common.sh@10 -- # set +x 00:22:57.333 06:53:11 -- host/discovery.sh@59 -- # xargs 00:22:57.333 06:53:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.333 06:53:11 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.333 06:53:11 -- host/discovery.sh@102 -- # get_bdev_list 00:22:57.333 06:53:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.333 
06:53:11 -- host/discovery.sh@55 -- # sort 00:22:57.333 06:53:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:57.333 06:53:11 -- host/discovery.sh@55 -- # xargs 00:22:57.333 06:53:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.333 06:53:11 -- common/autotest_common.sh@10 -- # set +x 00:22:57.333 06:53:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.593 06:53:11 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:57.593 06:53:11 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:22:57.593 06:53:11 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:57.593 06:53:11 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:57.593 06:53:11 -- host/discovery.sh@63 -- # sort -n 00:22:57.593 06:53:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.593 06:53:11 -- host/discovery.sh@63 -- # xargs 00:22:57.593 06:53:11 -- common/autotest_common.sh@10 -- # set +x 00:22:57.593 06:53:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.593 06:53:11 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:22:57.593 06:53:11 -- host/discovery.sh@104 -- # get_notification_count 00:22:57.593 06:53:11 -- host/discovery.sh@74 -- # jq '. | length' 00:22:57.593 06:53:11 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:57.593 06:53:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.593 06:53:11 -- common/autotest_common.sh@10 -- # set +x 00:22:57.593 06:53:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.593 06:53:11 -- host/discovery.sh@74 -- # notification_count=1 00:22:57.593 06:53:11 -- host/discovery.sh@75 -- # notify_id=1 00:22:57.593 06:53:11 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:22:57.593 06:53:11 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:57.593 06:53:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.593 06:53:11 -- common/autotest_common.sh@10 -- # set +x 00:22:57.593 06:53:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.593 06:53:11 -- host/discovery.sh@109 -- # sleep 1 00:22:58.530 06:53:12 -- host/discovery.sh@110 -- # get_bdev_list 00:22:58.530 06:53:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.530 06:53:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:58.530 06:53:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.530 06:53:12 -- common/autotest_common.sh@10 -- # set +x 00:22:58.530 06:53:12 -- host/discovery.sh@55 -- # sort 00:22:58.530 06:53:12 -- host/discovery.sh@55 -- # xargs 00:22:58.530 06:53:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.530 06:53:12 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:58.530 06:53:12 -- host/discovery.sh@111 -- # get_notification_count 00:22:58.530 06:53:12 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:58.530 06:53:12 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:58.530 06:53:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.530 06:53:12 -- common/autotest_common.sh@10 -- # set +x 00:22:58.789 06:53:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.789 06:53:12 -- host/discovery.sh@74 -- # notification_count=1 00:22:58.789 06:53:12 -- host/discovery.sh@75 -- # notify_id=2 00:22:58.789 06:53:12 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:22:58.789 06:53:12 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:58.789 06:53:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.789 06:53:12 -- common/autotest_common.sh@10 -- # set +x 00:22:58.789 [2024-12-14 06:53:12.575107] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:58.789 [2024-12-14 06:53:12.576081] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:58.789 [2024-12-14 06:53:12.576129] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:58.789 06:53:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.790 06:53:12 -- host/discovery.sh@117 -- # sleep 1 00:22:58.790 [2024-12-14 06:53:12.662133] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:58.790 [2024-12-14 06:53:12.727410] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:58.790 [2024-12-14 06:53:12.727433] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:58.790 [2024-12-14 06:53:12.727439] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:59.726 06:53:13 -- host/discovery.sh@118 -- # get_subsystem_names 00:22:59.726 06:53:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:59.726 06:53:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.726 06:53:13 -- common/autotest_common.sh@10 -- # set +x 00:22:59.726 06:53:13 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:59.726 06:53:13 -- host/discovery.sh@59 -- # sort 00:22:59.726 06:53:13 -- host/discovery.sh@59 -- # xargs 00:22:59.726 06:53:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.726 06:53:13 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.726 06:53:13 -- host/discovery.sh@119 -- # get_bdev_list 00:22:59.726 06:53:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.726 06:53:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:59.726 06:53:13 -- host/discovery.sh@55 -- # xargs 00:22:59.726 06:53:13 -- host/discovery.sh@55 -- # sort 00:22:59.726 06:53:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.726 06:53:13 -- common/autotest_common.sh@10 -- # set +x 00:22:59.726 06:53:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.726 06:53:13 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:59.726 06:53:13 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:22:59.726 06:53:13 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:59.726 06:53:13 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:59.726 06:53:13 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.726 06:53:13 -- common/autotest_common.sh@10 -- # set +x 00:22:59.726 06:53:13 -- host/discovery.sh@63 -- # xargs 00:22:59.726 06:53:13 -- host/discovery.sh@63 -- # sort -n 00:22:59.726 06:53:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.986 06:53:13 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:59.986 06:53:13 -- host/discovery.sh@121 -- # get_notification_count 00:22:59.986 06:53:13 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:59.986 06:53:13 -- host/discovery.sh@74 -- # jq '. | length' 00:22:59.986 06:53:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.986 06:53:13 -- common/autotest_common.sh@10 -- # set +x 00:22:59.986 06:53:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.986 06:53:13 -- host/discovery.sh@74 -- # notification_count=0 00:22:59.986 06:53:13 -- host/discovery.sh@75 -- # notify_id=2 00:22:59.986 06:53:13 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:22:59.986 06:53:13 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:59.986 06:53:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.986 06:53:13 -- common/autotest_common.sh@10 -- # set +x 00:22:59.986 [2024-12-14 06:53:13.812480] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:59.986 [2024-12-14 06:53:13.812538] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:59.986 [2024-12-14 06:53:13.814306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.986 [2024-12-14 06:53:13.814361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.986 [2024-12-14 06:53:13.814375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.986 [2024-12-14 06:53:13.814405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.986 [2024-12-14 06:53:13.814417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.986 [2024-12-14 06:53:13.814426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.986 [2024-12-14 06:53:13.814450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.986 [2024-12-14 06:53:13.814459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.986 [2024-12-14 06:53:13.814468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b9c0 is same with the state(5) to be set 00:22:59.986 06:53:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.986 06:53:13 -- host/discovery.sh@127 -- # sleep 1 00:22:59.986 [2024-12-14 06:53:13.824244] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190b9c0 (9): Bad file descriptor 00:22:59.986 [2024-12-14 06:53:13.834271] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:59.986 [2024-12-14 06:53:13.834425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.986 [2024-12-14 06:53:13.834505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.986 [2024-12-14 06:53:13.834521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190b9c0 with addr=10.0.0.2, port=4420 00:22:59.986 [2024-12-14 06:53:13.834531] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b9c0 is same with the state(5) to be set 00:22:59.986 [2024-12-14 06:53:13.834547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190b9c0 (9): Bad file descriptor 00:22:59.986 [2024-12-14 06:53:13.834560] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:59.986 [2024-12-14 06:53:13.834569] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:59.986 [2024-12-14 06:53:13.834579] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:59.986 [2024-12-14 06:53:13.834634] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.986 [2024-12-14 06:53:13.844338] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:59.986 [2024-12-14 06:53:13.844481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.986 [2024-12-14 06:53:13.844525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.986 [2024-12-14 06:53:13.844540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190b9c0 with addr=10.0.0.2, port=4420 00:22:59.986 [2024-12-14 06:53:13.844550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b9c0 is same with the state(5) to be set 00:22:59.986 [2024-12-14 06:53:13.844565] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190b9c0 (9): Bad file descriptor 00:22:59.986 [2024-12-14 06:53:13.844577] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:59.986 [2024-12-14 06:53:13.844585] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:59.986 [2024-12-14 06:53:13.844594] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:59.987 [2024-12-14 06:53:13.844620] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:59.987 [2024-12-14 06:53:13.854458] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:59.987 [2024-12-14 06:53:13.854583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.987 [2024-12-14 06:53:13.854627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.987 [2024-12-14 06:53:13.854658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190b9c0 with addr=10.0.0.2, port=4420 00:22:59.987 [2024-12-14 06:53:13.854668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b9c0 is same with the state(5) to be set 00:22:59.987 [2024-12-14 06:53:13.854682] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190b9c0 (9): Bad file descriptor 00:22:59.987 [2024-12-14 06:53:13.854745] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:59.987 [2024-12-14 06:53:13.854758] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:59.987 [2024-12-14 06:53:13.854767] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:59.987 [2024-12-14 06:53:13.854780] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.987 [2024-12-14 06:53:13.864538] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:59.987 [2024-12-14 06:53:13.864664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.987 [2024-12-14 06:53:13.864715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.987 [2024-12-14 06:53:13.864731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190b9c0 with addr=10.0.0.2, port=4420 00:22:59.987 [2024-12-14 06:53:13.864740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b9c0 is same with the state(5) to be set 00:22:59.987 [2024-12-14 06:53:13.864755] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190b9c0 (9): Bad file descriptor 00:22:59.987 [2024-12-14 06:53:13.864779] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:59.987 [2024-12-14 06:53:13.864789] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:59.987 [2024-12-14 06:53:13.864797] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:59.987 [2024-12-14 06:53:13.864810] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:59.987 [2024-12-14 06:53:13.874618] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:59.987 [2024-12-14 06:53:13.874706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.987 [2024-12-14 06:53:13.874766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.987 [2024-12-14 06:53:13.874782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190b9c0 with addr=10.0.0.2, port=4420 00:22:59.987 [2024-12-14 06:53:13.874793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b9c0 is same with the state(5) to be set 00:22:59.987 [2024-12-14 06:53:13.874808] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190b9c0 (9): Bad file descriptor 00:22:59.987 [2024-12-14 06:53:13.874862] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:59.987 [2024-12-14 06:53:13.874872] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:59.987 [2024-12-14 06:53:13.874884] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:59.987 [2024-12-14 06:53:13.874898] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.987 [2024-12-14 06:53:13.884676] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:59.987 [2024-12-14 06:53:13.884774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.987 [2024-12-14 06:53:13.884818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.987 [2024-12-14 06:53:13.884834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190b9c0 with addr=10.0.0.2, port=4420 00:22:59.987 [2024-12-14 06:53:13.884863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b9c0 is same with the state(5) to be set 00:22:59.987 [2024-12-14 06:53:13.884877] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190b9c0 (9): Bad file descriptor 00:22:59.987 [2024-12-14 06:53:13.884900] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:59.987 [2024-12-14 06:53:13.884914] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:59.987 [2024-12-14 06:53:13.884921] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:59.987 [2024-12-14 06:53:13.884934] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:59.987 [2024-12-14 06:53:13.894741] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:59.987 [2024-12-14 06:53:13.894841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.987 [2024-12-14 06:53:13.894883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.987 [2024-12-14 06:53:13.894913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190b9c0 with addr=10.0.0.2, port=4420 00:22:59.987 [2024-12-14 06:53:13.894924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b9c0 is same with the state(5) to be set 00:22:59.987 [2024-12-14 06:53:13.894938] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190b9c0 (9): Bad file descriptor 00:22:59.987 [2024-12-14 06:53:13.894970] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:59.987 [2024-12-14 06:53:13.894983] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:59.987 [2024-12-14 06:53:13.894991] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:59.987 [2024-12-14 06:53:13.895004] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.987 [2024-12-14 06:53:13.898964] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:59.987 [2024-12-14 06:53:13.899001] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:00.926 06:53:14 -- host/discovery.sh@128 -- # get_subsystem_names 00:23:00.926 06:53:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:00.926 06:53:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:00.926 06:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.926 06:53:14 -- host/discovery.sh@59 -- # sort 00:23:00.926 06:53:14 -- common/autotest_common.sh@10 -- # set +x 00:23:00.926 06:53:14 -- host/discovery.sh@59 -- # xargs 00:23:00.927 06:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.927 06:53:14 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.927 06:53:14 -- host/discovery.sh@129 -- # get_bdev_list 00:23:00.927 06:53:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.927 06:53:14 -- host/discovery.sh@55 -- # sort 00:23:00.927 06:53:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:00.927 06:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.927 06:53:14 -- common/autotest_common.sh@10 -- # set +x 00:23:00.927 06:53:14 -- host/discovery.sh@55 -- # xargs 00:23:00.927 06:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.189 06:53:14 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:01.189 06:53:14 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:23:01.189 06:53:14 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:01.189 06:53:14 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:01.189 06:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.189 06:53:14 -- common/autotest_common.sh@10 -- # set +x 00:23:01.189 06:53:14 -- 
host/discovery.sh@63 -- # sort -n 00:23:01.189 06:53:14 -- host/discovery.sh@63 -- # xargs 00:23:01.189 06:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.189 06:53:14 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:23:01.189 06:53:14 -- host/discovery.sh@131 -- # get_notification_count 00:23:01.189 06:53:14 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:01.189 06:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.189 06:53:14 -- common/autotest_common.sh@10 -- # set +x 00:23:01.189 06:53:14 -- host/discovery.sh@74 -- # jq '. | length' 00:23:01.189 06:53:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.189 06:53:15 -- host/discovery.sh@74 -- # notification_count=0 00:23:01.189 06:53:15 -- host/discovery.sh@75 -- # notify_id=2 00:23:01.189 06:53:15 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:23:01.189 06:53:15 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:01.189 06:53:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.189 06:53:15 -- common/autotest_common.sh@10 -- # set +x 00:23:01.189 06:53:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.189 06:53:15 -- host/discovery.sh@135 -- # sleep 1 00:23:02.124 06:53:16 -- host/discovery.sh@136 -- # get_subsystem_names 00:23:02.124 06:53:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:02.124 06:53:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.124 06:53:16 -- common/autotest_common.sh@10 -- # set +x 00:23:02.124 06:53:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:02.124 06:53:16 -- host/discovery.sh@59 -- # sort 00:23:02.124 06:53:16 -- host/discovery.sh@59 -- # xargs 00:23:02.124 06:53:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.383 06:53:16 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:23:02.383 06:53:16 -- host/discovery.sh@137 -- # get_bdev_list 00:23:02.383 06:53:16 -- host/discovery.sh@55 -- # sort 00:23:02.383 06:53:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.383 06:53:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:02.383 06:53:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.384 06:53:16 -- host/discovery.sh@55 -- # xargs 00:23:02.384 06:53:16 -- common/autotest_common.sh@10 -- # set +x 00:23:02.384 06:53:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.384 06:53:16 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:23:02.384 06:53:16 -- host/discovery.sh@138 -- # get_notification_count 00:23:02.384 06:53:16 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:02.384 06:53:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.384 06:53:16 -- common/autotest_common.sh@10 -- # set +x 00:23:02.384 06:53:16 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:02.384 06:53:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.384 06:53:16 -- host/discovery.sh@74 -- # notification_count=2 00:23:02.384 06:53:16 -- host/discovery.sh@75 -- # notify_id=4 00:23:02.384 06:53:16 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:23:02.384 06:53:16 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:02.384 06:53:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.384 06:53:16 -- common/autotest_common.sh@10 -- # set +x 00:23:03.319 [2024-12-14 06:53:17.245778] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:03.319 [2024-12-14 06:53:17.245819] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:03.319 [2024-12-14 06:53:17.245839] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:03.577 [2024-12-14 06:53:17.331955] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:03.577 [2024-12-14 06:53:17.391433] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:03.577 [2024-12-14 06:53:17.391484] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:03.577 06:53:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.578 06:53:17 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:03.578 06:53:17 -- common/autotest_common.sh@650 -- # local es=0 00:23:03.578 06:53:17 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:03.578 06:53:17 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:03.578 06:53:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.578 06:53:17 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:03.578 06:53:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.578 06:53:17 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:03.578 06:53:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.578 06:53:17 -- common/autotest_common.sh@10 -- # set +x 00:23:03.578 2024/12/14 06:53:17 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:23:03.578 request: 00:23:03.578 { 00:23:03.578 "method": "bdev_nvme_start_discovery", 00:23:03.578 "params": { 00:23:03.578 "name": "nvme", 00:23:03.578 "trtype": "tcp", 00:23:03.578 "traddr": "10.0.0.2", 00:23:03.578 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:03.578 "adrfam": "ipv4", 00:23:03.578 "trsvcid": "8009", 00:23:03.578 "wait_for_attach": true 00:23:03.578 } 00:23:03.578 } 00:23:03.578 Got JSON-RPC error response 00:23:03.578 GoRPCClient: error on JSON-RPC call 00:23:03.578 06:53:17 -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:03.578 06:53:17 -- common/autotest_common.sh@653 -- # es=1 00:23:03.578 06:53:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:03.578 06:53:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:03.578 06:53:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:03.578 06:53:17 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:23:03.578 06:53:17 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:03.578 06:53:17 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:03.578 06:53:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.578 06:53:17 -- common/autotest_common.sh@10 -- # set +x 00:23:03.578 06:53:17 -- host/discovery.sh@67 -- # sort 00:23:03.578 06:53:17 -- host/discovery.sh@67 -- # xargs 00:23:03.578 06:53:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.578 06:53:17 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:23:03.578 06:53:17 -- host/discovery.sh@147 -- # get_bdev_list 00:23:03.578 06:53:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:03.578 06:53:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.578 06:53:17 -- common/autotest_common.sh@10 -- # set +x 00:23:03.578 06:53:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:03.578 06:53:17 -- host/discovery.sh@55 -- # sort 00:23:03.578 06:53:17 -- host/discovery.sh@55 -- # xargs 00:23:03.578 06:53:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.578 06:53:17 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:03.578 06:53:17 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:03.578 06:53:17 -- common/autotest_common.sh@650 -- # local es=0 00:23:03.578 06:53:17 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:03.578 06:53:17 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:03.578 06:53:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.578 06:53:17 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:03.578 06:53:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.578 06:53:17 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:03.578 06:53:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.578 06:53:17 -- common/autotest_common.sh@10 -- # set +x 00:23:03.578 2024/12/14 06:53:17 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:23:03.578 request: 00:23:03.578 { 00:23:03.578 "method": "bdev_nvme_start_discovery", 00:23:03.578 "params": { 00:23:03.578 "name": "nvme_second", 00:23:03.578 "trtype": "tcp", 00:23:03.578 "traddr": "10.0.0.2", 00:23:03.578 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:03.578 "adrfam": "ipv4", 00:23:03.578 "trsvcid": "8009", 00:23:03.578 "wait_for_attach": true 00:23:03.578 } 00:23:03.578 } 00:23:03.578 Got JSON-RPC error response 00:23:03.578 
GoRPCClient: error on JSON-RPC call 00:23:03.578 06:53:17 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:03.578 06:53:17 -- common/autotest_common.sh@653 -- # es=1 00:23:03.578 06:53:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:03.578 06:53:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:03.578 06:53:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:03.578 06:53:17 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:23:03.578 06:53:17 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:03.578 06:53:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.578 06:53:17 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:03.578 06:53:17 -- common/autotest_common.sh@10 -- # set +x 00:23:03.578 06:53:17 -- host/discovery.sh@67 -- # sort 00:23:03.578 06:53:17 -- host/discovery.sh@67 -- # xargs 00:23:03.578 06:53:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.836 06:53:17 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:23:03.836 06:53:17 -- host/discovery.sh@153 -- # get_bdev_list 00:23:03.836 06:53:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:03.836 06:53:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.836 06:53:17 -- common/autotest_common.sh@10 -- # set +x 00:23:03.836 06:53:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:03.836 06:53:17 -- host/discovery.sh@55 -- # xargs 00:23:03.836 06:53:17 -- host/discovery.sh@55 -- # sort 00:23:03.836 06:53:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.836 06:53:17 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:03.837 06:53:17 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:03.837 06:53:17 -- common/autotest_common.sh@650 -- # local es=0 00:23:03.837 06:53:17 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:03.837 06:53:17 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:03.837 06:53:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.837 06:53:17 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:03.837 06:53:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.837 06:53:17 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:03.837 06:53:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.837 06:53:17 -- common/autotest_common.sh@10 -- # set +x 00:23:04.773 [2024-12-14 06:53:18.657221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.773 [2024-12-14 06:53:18.657319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.773 [2024-12-14 06:53:18.657338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1907970 with addr=10.0.0.2, port=8010 00:23:04.773 [2024-12-14 06:53:18.657360] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:04.773 [2024-12-14 06:53:18.657403] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:04.773 [2024-12-14 06:53:18.657426] bdev_nvme.c:6821:discovery_poller: 
*ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:05.709 [2024-12-14 06:53:19.657226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.709 [2024-12-14 06:53:19.657305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.709 [2024-12-14 06:53:19.657322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1907970 with addr=10.0.0.2, port=8010 00:23:05.709 [2024-12-14 06:53:19.657340] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:05.709 [2024-12-14 06:53:19.657348] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:05.709 [2024-12-14 06:53:19.657356] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:07.087 [2024-12-14 06:53:20.657081] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:07.087 2024/12/14 06:53:20 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:23:07.087 request: 00:23:07.087 { 00:23:07.087 "method": "bdev_nvme_start_discovery", 00:23:07.087 "params": { 00:23:07.087 "name": "nvme_second", 00:23:07.087 "trtype": "tcp", 00:23:07.087 "traddr": "10.0.0.2", 00:23:07.087 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:07.087 "adrfam": "ipv4", 00:23:07.087 "trsvcid": "8010", 00:23:07.087 "attach_timeout_ms": 3000 00:23:07.087 } 00:23:07.087 } 00:23:07.087 Got JSON-RPC error response 00:23:07.087 GoRPCClient: error on JSON-RPC call 00:23:07.087 06:53:20 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:07.087 06:53:20 -- common/autotest_common.sh@653 -- # es=1 00:23:07.087 06:53:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:07.087 06:53:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:07.087 06:53:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:07.087 06:53:20 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:23:07.087 06:53:20 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:07.087 06:53:20 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:07.087 06:53:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.087 06:53:20 -- common/autotest_common.sh@10 -- # set +x 00:23:07.087 06:53:20 -- host/discovery.sh@67 -- # sort 00:23:07.087 06:53:20 -- host/discovery.sh@67 -- # xargs 00:23:07.087 06:53:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.087 06:53:20 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:23:07.087 06:53:20 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:23:07.087 06:53:20 -- host/discovery.sh@162 -- # kill 85830 00:23:07.087 06:53:20 -- host/discovery.sh@163 -- # nvmftestfini 00:23:07.087 06:53:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:07.087 06:53:20 -- nvmf/common.sh@116 -- # sync 00:23:07.087 06:53:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:07.087 06:53:20 -- nvmf/common.sh@119 -- # set +e 00:23:07.087 06:53:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:07.087 06:53:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:07.087 rmmod nvme_tcp 00:23:07.087 rmmod nvme_fabrics 00:23:07.087 rmmod nvme_keyring 00:23:07.087 06:53:20 -- nvmf/common.sh@122 -- # modprobe -v 
-r nvme-fabrics 00:23:07.087 06:53:20 -- nvmf/common.sh@123 -- # set -e 00:23:07.087 06:53:20 -- nvmf/common.sh@124 -- # return 0 00:23:07.087 06:53:20 -- nvmf/common.sh@477 -- # '[' -n 85784 ']' 00:23:07.087 06:53:20 -- nvmf/common.sh@478 -- # killprocess 85784 00:23:07.087 06:53:20 -- common/autotest_common.sh@936 -- # '[' -z 85784 ']' 00:23:07.087 06:53:20 -- common/autotest_common.sh@940 -- # kill -0 85784 00:23:07.087 06:53:20 -- common/autotest_common.sh@941 -- # uname 00:23:07.087 06:53:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:07.087 06:53:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85784 00:23:07.087 06:53:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:07.088 killing process with pid 85784 00:23:07.088 06:53:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:07.088 06:53:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85784' 00:23:07.088 06:53:20 -- common/autotest_common.sh@955 -- # kill 85784 00:23:07.088 06:53:20 -- common/autotest_common.sh@960 -- # wait 85784 00:23:07.347 06:53:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:07.347 06:53:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:07.347 06:53:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:07.347 06:53:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:07.347 06:53:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:07.347 06:53:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.347 06:53:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.347 06:53:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.347 06:53:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:07.347 00:23:07.347 real 0m14.203s 00:23:07.347 user 0m27.670s 00:23:07.347 sys 0m1.820s 00:23:07.347 06:53:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:07.347 ************************************ 00:23:07.347 END TEST nvmf_discovery 00:23:07.347 ************************************ 00:23:07.347 06:53:21 -- common/autotest_common.sh@10 -- # set +x 00:23:07.347 06:53:21 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:07.347 06:53:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:07.347 06:53:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:07.347 06:53:21 -- common/autotest_common.sh@10 -- # set +x 00:23:07.347 ************************************ 00:23:07.347 START TEST nvmf_discovery_remove_ifc 00:23:07.347 ************************************ 00:23:07.347 06:53:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:07.607 * Looking for test storage... 
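Before the next test starts, nvmftestfini tears the previous setup down. Condensed into plain shell, the cleanup visible in the trace above amounts to roughly the following (a minimal sketch; the pid values and the nvmf_init_if interface name are specific to this run):

kill "$hostpid"                        # stop the host app that owned /tmp/host.sock (pid 85830 in this run)
sync                                   # flush I/O before unloading the kernel initiator modules
modprobe -v -r nvme-tcp                # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, as logged above
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"     # killprocess step for the nvmf_tgt reactor (pid 85784 here)
ip -4 addr flush nvmf_init_if          # drop 10.0.0.1/24 from the initiator-side veth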
00:23:07.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:07.607 06:53:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:07.607 06:53:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:07.607 06:53:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:07.607 06:53:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:07.607 06:53:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:07.607 06:53:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:07.607 06:53:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:07.607 06:53:21 -- scripts/common.sh@335 -- # IFS=.-: 00:23:07.607 06:53:21 -- scripts/common.sh@335 -- # read -ra ver1 00:23:07.607 06:53:21 -- scripts/common.sh@336 -- # IFS=.-: 00:23:07.607 06:53:21 -- scripts/common.sh@336 -- # read -ra ver2 00:23:07.607 06:53:21 -- scripts/common.sh@337 -- # local 'op=<' 00:23:07.607 06:53:21 -- scripts/common.sh@339 -- # ver1_l=2 00:23:07.607 06:53:21 -- scripts/common.sh@340 -- # ver2_l=1 00:23:07.607 06:53:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:07.607 06:53:21 -- scripts/common.sh@343 -- # case "$op" in 00:23:07.607 06:53:21 -- scripts/common.sh@344 -- # : 1 00:23:07.607 06:53:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:07.607 06:53:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:07.607 06:53:21 -- scripts/common.sh@364 -- # decimal 1 00:23:07.607 06:53:21 -- scripts/common.sh@352 -- # local d=1 00:23:07.607 06:53:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:07.607 06:53:21 -- scripts/common.sh@354 -- # echo 1 00:23:07.607 06:53:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:07.607 06:53:21 -- scripts/common.sh@365 -- # decimal 2 00:23:07.607 06:53:21 -- scripts/common.sh@352 -- # local d=2 00:23:07.607 06:53:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:07.607 06:53:21 -- scripts/common.sh@354 -- # echo 2 00:23:07.607 06:53:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:07.607 06:53:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:07.607 06:53:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:07.607 06:53:21 -- scripts/common.sh@367 -- # return 0 00:23:07.607 06:53:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:07.607 06:53:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:07.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.607 --rc genhtml_branch_coverage=1 00:23:07.607 --rc genhtml_function_coverage=1 00:23:07.607 --rc genhtml_legend=1 00:23:07.607 --rc geninfo_all_blocks=1 00:23:07.607 --rc geninfo_unexecuted_blocks=1 00:23:07.607 00:23:07.607 ' 00:23:07.607 06:53:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:07.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.607 --rc genhtml_branch_coverage=1 00:23:07.607 --rc genhtml_function_coverage=1 00:23:07.607 --rc genhtml_legend=1 00:23:07.607 --rc geninfo_all_blocks=1 00:23:07.607 --rc geninfo_unexecuted_blocks=1 00:23:07.607 00:23:07.607 ' 00:23:07.607 06:53:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:07.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.607 --rc genhtml_branch_coverage=1 00:23:07.607 --rc genhtml_function_coverage=1 00:23:07.607 --rc genhtml_legend=1 00:23:07.607 --rc geninfo_all_blocks=1 00:23:07.607 --rc geninfo_unexecuted_blocks=1 00:23:07.607 00:23:07.607 ' 00:23:07.607 
06:53:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:07.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.607 --rc genhtml_branch_coverage=1 00:23:07.607 --rc genhtml_function_coverage=1 00:23:07.607 --rc genhtml_legend=1 00:23:07.607 --rc geninfo_all_blocks=1 00:23:07.607 --rc geninfo_unexecuted_blocks=1 00:23:07.607 00:23:07.607 ' 00:23:07.607 06:53:21 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:07.607 06:53:21 -- nvmf/common.sh@7 -- # uname -s 00:23:07.607 06:53:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.607 06:53:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.607 06:53:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.607 06:53:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.607 06:53:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.607 06:53:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.607 06:53:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.607 06:53:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.607 06:53:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.607 06:53:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.607 06:53:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:23:07.607 06:53:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:23:07.607 06:53:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.607 06:53:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.607 06:53:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:07.607 06:53:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:07.607 06:53:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.607 06:53:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.607 06:53:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.607 06:53:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.607 06:53:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.607 06:53:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.607 06:53:21 -- paths/export.sh@5 -- # export PATH 00:23:07.607 06:53:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.607 06:53:21 -- nvmf/common.sh@46 -- # : 0 00:23:07.607 06:53:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:07.607 06:53:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:07.608 06:53:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:07.608 06:53:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.608 06:53:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.608 06:53:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:07.608 06:53:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:07.608 06:53:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:07.608 06:53:21 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:07.608 06:53:21 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:07.608 06:53:21 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:07.608 06:53:21 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:07.608 06:53:21 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:07.608 06:53:21 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:07.608 06:53:21 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:07.608 06:53:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:07.608 06:53:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.608 06:53:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:07.608 06:53:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:07.608 06:53:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:07.608 06:53:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.608 06:53:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.608 06:53:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.608 06:53:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:07.608 06:53:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:07.608 06:53:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:07.608 06:53:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:07.608 06:53:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:07.608 06:53:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:07.608 06:53:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.608 06:53:21 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.608 06:53:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:07.608 06:53:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:07.608 06:53:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:07.608 06:53:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:07.608 06:53:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:07.608 06:53:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.608 06:53:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:07.608 06:53:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:07.608 06:53:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:07.608 06:53:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:07.608 06:53:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:07.608 06:53:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:07.608 Cannot find device "nvmf_tgt_br" 00:23:07.608 06:53:21 -- nvmf/common.sh@154 -- # true 00:23:07.608 06:53:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:07.608 Cannot find device "nvmf_tgt_br2" 00:23:07.608 06:53:21 -- nvmf/common.sh@155 -- # true 00:23:07.608 06:53:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:07.608 06:53:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:07.608 Cannot find device "nvmf_tgt_br" 00:23:07.608 06:53:21 -- nvmf/common.sh@157 -- # true 00:23:07.608 06:53:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:07.608 Cannot find device "nvmf_tgt_br2" 00:23:07.608 06:53:21 -- nvmf/common.sh@158 -- # true 00:23:07.608 06:53:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:07.868 06:53:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:07.868 06:53:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:07.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:07.868 06:53:21 -- nvmf/common.sh@161 -- # true 00:23:07.868 06:53:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:07.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:07.868 06:53:21 -- nvmf/common.sh@162 -- # true 00:23:07.868 06:53:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:07.868 06:53:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:07.868 06:53:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:07.868 06:53:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:07.868 06:53:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:07.868 06:53:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:07.868 06:53:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:07.868 06:53:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:07.868 06:53:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:07.868 06:53:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:07.868 06:53:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:07.868 06:53:21 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:07.868 06:53:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:07.868 06:53:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:07.868 06:53:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:07.868 06:53:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:07.868 06:53:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:07.868 06:53:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:07.868 06:53:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:07.868 06:53:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:07.868 06:53:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:07.868 06:53:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:07.868 06:53:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:07.868 06:53:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:07.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:23:07.868 00:23:07.868 --- 10.0.0.2 ping statistics --- 00:23:07.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.868 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:23:07.868 06:53:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:07.868 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:07.868 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:23:07.868 00:23:07.868 --- 10.0.0.3 ping statistics --- 00:23:07.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.868 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:23:07.868 06:53:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:07.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:07.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:23:07.868 00:23:07.868 --- 10.0.0.1 ping statistics --- 00:23:07.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.868 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:23:07.868 06:53:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.868 06:53:21 -- nvmf/common.sh@421 -- # return 0 00:23:07.868 06:53:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:07.868 06:53:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.868 06:53:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:07.868 06:53:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:07.868 06:53:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.868 06:53:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:07.868 06:53:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:08.127 06:53:21 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:08.127 06:53:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:08.127 06:53:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:08.127 06:53:21 -- common/autotest_common.sh@10 -- # set +x 00:23:08.127 06:53:21 -- nvmf/common.sh@469 -- # nvmfpid=86342 00:23:08.127 06:53:21 -- nvmf/common.sh@470 -- # waitforlisten 86342 00:23:08.127 06:53:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:08.127 06:53:21 -- common/autotest_common.sh@829 -- # '[' -z 86342 ']' 00:23:08.127 06:53:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.127 06:53:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.127 06:53:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.127 06:53:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.127 06:53:21 -- common/autotest_common.sh@10 -- # set +x 00:23:08.127 [2024-12-14 06:53:21.933473] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:08.127 [2024-12-14 06:53:21.933572] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.127 [2024-12-14 06:53:22.063797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.385 [2024-12-14 06:53:22.161233] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:08.385 [2024-12-14 06:53:22.161424] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.385 [2024-12-14 06:53:22.161436] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.385 [2024-12-14 06:53:22.161444] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
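The nvmf_veth_init sequence interleaved with the trace above builds a small bridged topology: the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2/10.0.0.3, the initiator stays in the root namespace on 10.0.0.1, and the veth peers are joined by the nvmf_br bridge. Reconstructed as a standalone sketch (run as root; the interface and namespace names are the ones nvmf/common.sh uses in this run):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator <-> bridge
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target   <-> bridge
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # root ns -> target ns
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target ns -> root ns

The three single-packet pings with 0% loss in the log are the success criterion for this setup.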
00:23:08.385 [2024-12-14 06:53:22.161477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.953 06:53:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.953 06:53:22 -- common/autotest_common.sh@862 -- # return 0 00:23:08.953 06:53:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:08.953 06:53:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:08.953 06:53:22 -- common/autotest_common.sh@10 -- # set +x 00:23:09.211 06:53:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.211 06:53:22 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:09.211 06:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.211 06:53:22 -- common/autotest_common.sh@10 -- # set +x 00:23:09.211 [2024-12-14 06:53:22.974063] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.211 [2024-12-14 06:53:22.982246] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:09.211 null0 00:23:09.211 [2024-12-14 06:53:23.014090] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.212 06:53:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.212 06:53:23 -- host/discovery_remove_ifc.sh@59 -- # hostpid=86392 00:23:09.212 06:53:23 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:09.212 06:53:23 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 86392 /tmp/host.sock 00:23:09.212 06:53:23 -- common/autotest_common.sh@829 -- # '[' -z 86392 ']' 00:23:09.212 06:53:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:09.212 06:53:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.212 06:53:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:09.212 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:09.212 06:53:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.212 06:53:23 -- common/autotest_common.sh@10 -- # set +x 00:23:09.212 [2024-12-14 06:53:23.100521] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:09.212 [2024-12-14 06:53:23.100628] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86392 ] 00:23:09.471 [2024-12-14 06:53:23.242338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.471 [2024-12-14 06:53:23.357015] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:09.471 [2024-12-14 06:53:23.357238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.408 06:53:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:10.408 06:53:24 -- common/autotest_common.sh@862 -- # return 0 00:23:10.408 06:53:24 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:10.408 06:53:24 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:10.408 06:53:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.408 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:23:10.408 06:53:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.408 06:53:24 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:10.408 06:53:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.408 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:23:10.408 06:53:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.408 06:53:24 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:10.408 06:53:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.408 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:23:11.341 [2024-12-14 06:53:25.197287] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:11.341 [2024-12-14 06:53:25.197352] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:11.341 [2024-12-14 06:53:25.197371] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:11.341 [2024-12-14 06:53:25.283421] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:11.600 [2024-12-14 06:53:25.339458] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:11.600 [2024-12-14 06:53:25.339527] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:11.600 [2024-12-14 06:53:25.339557] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:11.600 [2024-12-14 06:53:25.339573] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:11.600 [2024-12-14 06:53:25.339599] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:11.600 06:53:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.600 06:53:25 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:11.600 06:53:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:11.600 [2024-12-14 
06:53:25.345782] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xcad840 was disconnected and freed. delete nvme_qpair. 00:23:11.600 06:53:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:11.600 06:53:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.600 06:53:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:11.600 06:53:25 -- common/autotest_common.sh@10 -- # set +x 00:23:11.600 06:53:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:11.600 06:53:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:11.600 06:53:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.600 06:53:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:11.600 06:53:25 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:23:11.600 06:53:25 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:23:11.600 06:53:25 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:11.600 06:53:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:11.600 06:53:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:11.600 06:53:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:11.600 06:53:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:11.600 06:53:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.600 06:53:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:11.600 06:53:25 -- common/autotest_common.sh@10 -- # set +x 00:23:11.600 06:53:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.600 06:53:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:11.600 06:53:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:12.536 06:53:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:12.536 06:53:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.536 06:53:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.536 06:53:26 -- common/autotest_common.sh@10 -- # set +x 00:23:12.536 06:53:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:12.536 06:53:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:12.536 06:53:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:12.536 06:53:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.795 06:53:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:12.795 06:53:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:13.731 06:53:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:13.731 06:53:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.731 06:53:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:13.731 06:53:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:13.731 06:53:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.731 06:53:27 -- common/autotest_common.sh@10 -- # set +x 00:23:13.731 06:53:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:13.731 06:53:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.731 06:53:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:13.731 06:53:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:14.666 06:53:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:14.666 06:53:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
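The repeated get_bdev_list / sleep 1 pairs around this point are the test's polling loop: it keeps reading the bdev list from the host app until the expected namespace shows up (or disappears). Stripped of the xtrace noise, the helper pair behaves roughly like this sketch (socket path and rpc.py location as used in this run; the real helpers in host/discovery_remove_ifc.sh may add a timeout):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"
get_bdev_list() {
    # names of all bdevs known to the host app, sorted into one line
    $rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
wait_for_bdev() {
    local expected=$1
    # poll once per second until the list matches, e.g. "nvme0n1" after the
    # first discovery attach or "" after the target interface is pulled
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}
wait_for_bdev nvme0n1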
00:23:14.666 06:53:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:14.666 06:53:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.666 06:53:28 -- common/autotest_common.sh@10 -- # set +x 00:23:14.666 06:53:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:14.666 06:53:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:14.666 06:53:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.666 06:53:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:14.666 06:53:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:16.076 06:53:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:16.076 06:53:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:16.076 06:53:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.076 06:53:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:16.076 06:53:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.076 06:53:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:16.076 06:53:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.076 06:53:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.076 06:53:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:16.076 06:53:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:17.022 06:53:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:17.022 06:53:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.022 06:53:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.022 06:53:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:17.022 06:53:30 -- common/autotest_common.sh@10 -- # set +x 00:23:17.022 06:53:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:17.022 06:53:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:17.022 06:53:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.022 [2024-12-14 06:53:30.767080] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:17.022 [2024-12-14 06:53:30.767183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.022 [2024-12-14 06:53:30.767199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.022 [2024-12-14 06:53:30.767211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.022 [2024-12-14 06:53:30.767220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.022 [2024-12-14 06:53:30.767230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.022 [2024-12-14 06:53:30.767239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.022 [2024-12-14 06:53:30.767248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.022 [2024-12-14 06:53:30.767256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.022 [2024-12-14 
06:53:30.767266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.022 [2024-12-14 06:53:30.767274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.022 [2024-12-14 06:53:30.767283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc249f0 is same with the state(5) to be set 00:23:17.022 [2024-12-14 06:53:30.777075] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc249f0 (9): Bad file descriptor 00:23:17.022 06:53:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:17.022 06:53:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:17.022 [2024-12-14 06:53:30.787097] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:17.957 06:53:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:17.957 06:53:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.957 06:53:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:17.957 06:53:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.957 06:53:31 -- common/autotest_common.sh@10 -- # set +x 00:23:17.957 06:53:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:17.957 06:53:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:17.957 [2024-12-14 06:53:31.826074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:18.895 [2024-12-14 06:53:32.850077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:18.895 [2024-12-14 06:53:32.850201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc249f0 with addr=10.0.0.2, port=4420 00:23:18.895 [2024-12-14 06:53:32.850240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc249f0 is same with the state(5) to be set 00:23:18.895 [2024-12-14 06:53:32.850299] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:18.895 [2024-12-14 06:53:32.850323] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:18.895 [2024-12-14 06:53:32.850343] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:18.895 [2024-12-14 06:53:32.850366] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:23:18.895 [2024-12-14 06:53:32.851213] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc249f0 (9): Bad file descriptor 00:23:18.895 [2024-12-14 06:53:32.851293] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:18.895 [2024-12-14 06:53:32.851350] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:18.895 [2024-12-14 06:53:32.851428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.895 [2024-12-14 06:53:32.851458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.895 [2024-12-14 06:53:32.851486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.895 [2024-12-14 06:53:32.851507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.895 [2024-12-14 06:53:32.851529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.895 [2024-12-14 06:53:32.851549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.895 [2024-12-14 06:53:32.851571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.895 [2024-12-14 06:53:32.851601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.895 [2024-12-14 06:53:32.851624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.895 [2024-12-14 06:53:32.851649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.895 [2024-12-14 06:53:32.851669] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
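The error burst above is the expected effect of pulling the target's address out from under the connected controller: keep-alive and reconnect both fail, the 2-second ctrlr-loss timeout expires, and nvme0n1 drops out of the bdev list. The trace that follows then restores the interface and waits for discovery to re-attach the subsystem as nvme1. In isolation, that remove/re-add cycle is roughly (same namespace and helper names as in the sketches above):

# take the target address away and wait for the controller/bdev to disappear
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
wait_for_bdev ''            # list empties once ctrlr-loss-timeout-sec=2 has elapsed

# bring the address back and wait for the discovery service to re-create the controller
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
wait_for_bdev nvme1n1       # re-attached controller is named nvme1, namespace nvme1n1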
00:23:18.895 [2024-12-14 06:53:32.851700] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc24e00 (9): Bad file descriptor 00:23:18.895 [2024-12-14 06:53:32.852322] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:18.895 [2024-12-14 06:53:32.852363] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:18.895 06:53:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.895 06:53:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:18.895 06:53:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:20.273 06:53:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:20.273 06:53:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.273 06:53:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.273 06:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:20.273 06:53:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:20.273 06:53:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:20.273 06:53:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:20.273 06:53:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.273 06:53:33 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:20.273 06:53:33 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:20.273 06:53:33 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:20.273 06:53:33 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:20.273 06:53:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:20.273 06:53:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.273 06:53:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:20.273 06:53:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.273 06:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:20.273 06:53:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:20.273 06:53:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:20.273 06:53:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.273 06:53:34 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:20.273 06:53:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:21.214 [2024-12-14 06:53:34.858684] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:21.214 [2024-12-14 06:53:34.858712] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:21.215 [2024-12-14 06:53:34.858745] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:21.215 [2024-12-14 06:53:34.944778] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:21.215 [2024-12-14 06:53:34.999847] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:21.215 [2024-12-14 06:53:34.999910] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:21.215 [2024-12-14 06:53:34.999933] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:21.215 [2024-12-14 06:53:34.999948] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:23:21.215 [2024-12-14 06:53:34.999956] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:21.215 06:53:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:21.215 [2024-12-14 06:53:35.007437] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc68080 was disconnected and freed. delete nvme_qpair. 00:23:21.215 06:53:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.215 06:53:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.215 06:53:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:21.215 06:53:35 -- common/autotest_common.sh@10 -- # set +x 00:23:21.215 06:53:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:21.215 06:53:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:21.215 06:53:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.215 06:53:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:21.215 06:53:35 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:21.215 06:53:35 -- host/discovery_remove_ifc.sh@90 -- # killprocess 86392 00:23:21.215 06:53:35 -- common/autotest_common.sh@936 -- # '[' -z 86392 ']' 00:23:21.215 06:53:35 -- common/autotest_common.sh@940 -- # kill -0 86392 00:23:21.215 06:53:35 -- common/autotest_common.sh@941 -- # uname 00:23:21.215 06:53:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:21.215 06:53:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86392 00:23:21.215 06:53:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:21.215 06:53:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:21.215 killing process with pid 86392 00:23:21.215 06:53:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86392' 00:23:21.215 06:53:35 -- common/autotest_common.sh@955 -- # kill 86392 00:23:21.215 06:53:35 -- common/autotest_common.sh@960 -- # wait 86392 00:23:21.477 06:53:35 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:21.477 06:53:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:21.477 06:53:35 -- nvmf/common.sh@116 -- # sync 00:23:21.477 06:53:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:21.477 06:53:35 -- nvmf/common.sh@119 -- # set +e 00:23:21.477 06:53:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:21.477 06:53:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:21.477 rmmod nvme_tcp 00:23:21.477 rmmod nvme_fabrics 00:23:21.735 rmmod nvme_keyring 00:23:21.735 06:53:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:21.735 06:53:35 -- nvmf/common.sh@123 -- # set -e 00:23:21.735 06:53:35 -- nvmf/common.sh@124 -- # return 0 00:23:21.735 06:53:35 -- nvmf/common.sh@477 -- # '[' -n 86342 ']' 00:23:21.735 06:53:35 -- nvmf/common.sh@478 -- # killprocess 86342 00:23:21.735 06:53:35 -- common/autotest_common.sh@936 -- # '[' -z 86342 ']' 00:23:21.735 06:53:35 -- common/autotest_common.sh@940 -- # kill -0 86342 00:23:21.735 06:53:35 -- common/autotest_common.sh@941 -- # uname 00:23:21.735 06:53:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:21.735 06:53:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86342 00:23:21.735 06:53:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:21.735 06:53:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:21.735 killing process with pid 86342 
00:23:21.735 06:53:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86342' 00:23:21.735 06:53:35 -- common/autotest_common.sh@955 -- # kill 86342 00:23:21.735 06:53:35 -- common/autotest_common.sh@960 -- # wait 86342 00:23:21.994 06:53:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:21.994 06:53:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:21.994 06:53:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:21.994 06:53:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:21.994 06:53:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:21.994 06:53:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.994 06:53:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:21.994 06:53:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.994 06:53:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:21.994 00:23:21.994 real 0m14.592s 00:23:21.994 user 0m24.831s 00:23:21.994 sys 0m1.691s 00:23:21.994 06:53:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:21.994 ************************************ 00:23:21.994 END TEST nvmf_discovery_remove_ifc 00:23:21.994 ************************************ 00:23:21.994 06:53:35 -- common/autotest_common.sh@10 -- # set +x 00:23:21.994 06:53:35 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:23:21.994 06:53:35 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:21.994 06:53:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:21.994 06:53:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:21.994 06:53:35 -- common/autotest_common.sh@10 -- # set +x 00:23:21.994 ************************************ 00:23:21.994 START TEST nvmf_digest 00:23:21.994 ************************************ 00:23:21.994 06:53:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:22.253 * Looking for test storage... 00:23:22.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:22.253 06:53:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:22.253 06:53:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:22.253 06:53:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:22.253 06:53:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:22.253 06:53:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:22.253 06:53:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:22.253 06:53:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:22.253 06:53:36 -- scripts/common.sh@335 -- # IFS=.-: 00:23:22.253 06:53:36 -- scripts/common.sh@335 -- # read -ra ver1 00:23:22.253 06:53:36 -- scripts/common.sh@336 -- # IFS=.-: 00:23:22.253 06:53:36 -- scripts/common.sh@336 -- # read -ra ver2 00:23:22.253 06:53:36 -- scripts/common.sh@337 -- # local 'op=<' 00:23:22.253 06:53:36 -- scripts/common.sh@339 -- # ver1_l=2 00:23:22.253 06:53:36 -- scripts/common.sh@340 -- # ver2_l=1 00:23:22.253 06:53:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:22.253 06:53:36 -- scripts/common.sh@343 -- # case "$op" in 00:23:22.253 06:53:36 -- scripts/common.sh@344 -- # : 1 00:23:22.253 06:53:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:22.253 06:53:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:22.253 06:53:36 -- scripts/common.sh@364 -- # decimal 1 00:23:22.253 06:53:36 -- scripts/common.sh@352 -- # local d=1 00:23:22.254 06:53:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:22.254 06:53:36 -- scripts/common.sh@354 -- # echo 1 00:23:22.254 06:53:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:22.254 06:53:36 -- scripts/common.sh@365 -- # decimal 2 00:23:22.254 06:53:36 -- scripts/common.sh@352 -- # local d=2 00:23:22.254 06:53:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:22.254 06:53:36 -- scripts/common.sh@354 -- # echo 2 00:23:22.254 06:53:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:22.254 06:53:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:22.254 06:53:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:22.254 06:53:36 -- scripts/common.sh@367 -- # return 0 00:23:22.254 06:53:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:22.254 06:53:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:22.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.254 --rc genhtml_branch_coverage=1 00:23:22.254 --rc genhtml_function_coverage=1 00:23:22.254 --rc genhtml_legend=1 00:23:22.254 --rc geninfo_all_blocks=1 00:23:22.254 --rc geninfo_unexecuted_blocks=1 00:23:22.254 00:23:22.254 ' 00:23:22.254 06:53:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:22.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.254 --rc genhtml_branch_coverage=1 00:23:22.254 --rc genhtml_function_coverage=1 00:23:22.254 --rc genhtml_legend=1 00:23:22.254 --rc geninfo_all_blocks=1 00:23:22.254 --rc geninfo_unexecuted_blocks=1 00:23:22.254 00:23:22.254 ' 00:23:22.254 06:53:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:22.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.254 --rc genhtml_branch_coverage=1 00:23:22.254 --rc genhtml_function_coverage=1 00:23:22.254 --rc genhtml_legend=1 00:23:22.254 --rc geninfo_all_blocks=1 00:23:22.254 --rc geninfo_unexecuted_blocks=1 00:23:22.254 00:23:22.254 ' 00:23:22.254 06:53:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:22.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.254 --rc genhtml_branch_coverage=1 00:23:22.254 --rc genhtml_function_coverage=1 00:23:22.254 --rc genhtml_legend=1 00:23:22.254 --rc geninfo_all_blocks=1 00:23:22.254 --rc geninfo_unexecuted_blocks=1 00:23:22.254 00:23:22.254 ' 00:23:22.254 06:53:36 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:22.254 06:53:36 -- nvmf/common.sh@7 -- # uname -s 00:23:22.254 06:53:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.254 06:53:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.254 06:53:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.254 06:53:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.254 06:53:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.254 06:53:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.254 06:53:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.254 06:53:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.254 06:53:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.254 06:53:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.254 06:53:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:23:22.254 
06:53:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:23:22.254 06:53:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.254 06:53:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.254 06:53:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:22.254 06:53:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:22.254 06:53:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.254 06:53:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.254 06:53:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.254 06:53:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.254 06:53:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.254 06:53:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.254 06:53:36 -- paths/export.sh@5 -- # export PATH 00:23:22.254 06:53:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.254 06:53:36 -- nvmf/common.sh@46 -- # : 0 00:23:22.254 06:53:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:22.254 06:53:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:22.254 06:53:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:22.254 06:53:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.254 06:53:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.254 06:53:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:23:22.254 06:53:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:22.254 06:53:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:22.254 06:53:36 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:22.254 06:53:36 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:22.254 06:53:36 -- host/digest.sh@16 -- # runtime=2 00:23:22.254 06:53:36 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:23:22.254 06:53:36 -- host/digest.sh@132 -- # nvmftestinit 00:23:22.254 06:53:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:22.254 06:53:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.254 06:53:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:22.254 06:53:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:22.254 06:53:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:22.254 06:53:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.254 06:53:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.254 06:53:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.254 06:53:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:22.254 06:53:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:22.254 06:53:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:22.254 06:53:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:22.254 06:53:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:22.254 06:53:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:22.254 06:53:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.254 06:53:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:22.254 06:53:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:22.254 06:53:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:22.254 06:53:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:22.254 06:53:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:22.254 06:53:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:22.254 06:53:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.254 06:53:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:22.254 06:53:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:22.254 06:53:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:22.254 06:53:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:22.254 06:53:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:22.254 06:53:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:22.254 Cannot find device "nvmf_tgt_br" 00:23:22.254 06:53:36 -- nvmf/common.sh@154 -- # true 00:23:22.254 06:53:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:22.254 Cannot find device "nvmf_tgt_br2" 00:23:22.254 06:53:36 -- nvmf/common.sh@155 -- # true 00:23:22.254 06:53:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:22.254 06:53:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:22.254 Cannot find device "nvmf_tgt_br" 00:23:22.254 06:53:36 -- nvmf/common.sh@157 -- # true 00:23:22.254 06:53:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:22.513 Cannot find device "nvmf_tgt_br2" 00:23:22.513 06:53:36 -- nvmf/common.sh@158 -- # true 00:23:22.513 06:53:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:22.513 06:53:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:22.513 
06:53:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:22.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:22.513 06:53:36 -- nvmf/common.sh@161 -- # true 00:23:22.513 06:53:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:22.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:22.513 06:53:36 -- nvmf/common.sh@162 -- # true 00:23:22.513 06:53:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:22.513 06:53:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:22.513 06:53:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:22.513 06:53:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:22.513 06:53:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:22.513 06:53:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:22.513 06:53:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:22.513 06:53:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:22.513 06:53:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:22.513 06:53:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:22.513 06:53:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:22.513 06:53:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:22.513 06:53:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:22.513 06:53:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:22.513 06:53:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:22.513 06:53:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:22.513 06:53:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:22.513 06:53:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:22.513 06:53:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:22.513 06:53:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:22.513 06:53:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:22.513 06:53:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:22.513 06:53:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:22.513 06:53:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:22.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:22.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:23:22.513 00:23:22.513 --- 10.0.0.2 ping statistics --- 00:23:22.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.513 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:23:22.772 06:53:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:22.772 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:22.772 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:23:22.772 00:23:22.772 --- 10.0.0.3 ping statistics --- 00:23:22.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.772 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:23:22.772 06:53:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:22.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:22.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:23:22.772 00:23:22.772 --- 10.0.0.1 ping statistics --- 00:23:22.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.772 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:23:22.772 06:53:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.772 06:53:36 -- nvmf/common.sh@421 -- # return 0 00:23:22.772 06:53:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:22.772 06:53:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:22.772 06:53:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:22.772 06:53:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:22.772 06:53:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:22.772 06:53:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:22.772 06:53:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:22.772 06:53:36 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:22.772 06:53:36 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:23:22.772 06:53:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:22.772 06:53:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:22.772 06:53:36 -- common/autotest_common.sh@10 -- # set +x 00:23:22.772 ************************************ 00:23:22.772 START TEST nvmf_digest_clean 00:23:22.772 ************************************ 00:23:22.772 06:53:36 -- common/autotest_common.sh@1114 -- # run_digest 00:23:22.772 06:53:36 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:23:22.772 06:53:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:22.772 06:53:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:22.772 06:53:36 -- common/autotest_common.sh@10 -- # set +x 00:23:22.772 06:53:36 -- nvmf/common.sh@469 -- # nvmfpid=86820 00:23:22.772 06:53:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:22.772 06:53:36 -- nvmf/common.sh@470 -- # waitforlisten 86820 00:23:22.772 06:53:36 -- common/autotest_common.sh@829 -- # '[' -z 86820 ']' 00:23:22.772 06:53:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.772 06:53:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:22.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.772 06:53:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.772 06:53:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:22.772 06:53:36 -- common/autotest_common.sh@10 -- # set +x 00:23:22.772 [2024-12-14 06:53:36.609318] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
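For reference, the interface topology nvmf_veth_init just finished building (and sanity-checked with the pings above) comes down to the sketch below; the stale-device cleanup and the failed deletes logged before the setup are omitted.

    # condensed from the traced nvmf_veth_init commands; cleanup and error handling omitted
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # target ends move into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                        # bridge the host-side peers together
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                             # host -> target namespace, as in the output above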
00:23:22.772 [2024-12-14 06:53:36.609585] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.772 [2024-12-14 06:53:36.748785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.031 [2024-12-14 06:53:36.877895] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:23.031 [2024-12-14 06:53:36.878091] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.031 [2024-12-14 06:53:36.878117] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.031 [2024-12-14 06:53:36.878149] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:23.031 [2024-12-14 06:53:36.878187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.966 06:53:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:23.966 06:53:37 -- common/autotest_common.sh@862 -- # return 0 00:23:23.966 06:53:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:23.966 06:53:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:23.966 06:53:37 -- common/autotest_common.sh@10 -- # set +x 00:23:23.966 06:53:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.966 06:53:37 -- host/digest.sh@120 -- # common_target_config 00:23:23.966 06:53:37 -- host/digest.sh@43 -- # rpc_cmd 00:23:23.966 06:53:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.966 06:53:37 -- common/autotest_common.sh@10 -- # set +x 00:23:23.966 null0 00:23:23.966 [2024-12-14 06:53:37.823342] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.966 [2024-12-14 06:53:37.847481] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.966 06:53:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.966 06:53:37 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:23:23.966 06:53:37 -- host/digest.sh@77 -- # local rw bs qd 00:23:23.966 06:53:37 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:23.966 06:53:37 -- host/digest.sh@80 -- # rw=randread 00:23:23.966 06:53:37 -- host/digest.sh@80 -- # bs=4096 00:23:23.966 06:53:37 -- host/digest.sh@80 -- # qd=128 00:23:23.966 06:53:37 -- host/digest.sh@82 -- # bperfpid=86870 00:23:23.966 06:53:37 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:23.966 06:53:37 -- host/digest.sh@83 -- # waitforlisten 86870 /var/tmp/bperf.sock 00:23:23.966 06:53:37 -- common/autotest_common.sh@829 -- # '[' -z 86870 ']' 00:23:23.966 06:53:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:23.966 06:53:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:23.966 06:53:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
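The common_target_config call above is not expanded in the trace: only the rpc_cmd wrapper, the null0 bdev and the resulting 10.0.0.2:4420 listener are visible. Based on those results it presumably issues RPCs along the lines of the sketch below; the null bdev size/block size and the -a/-s subsystem flags are assumptions, the other values mirror what already appears in this log.

    # presumed RPC sequence behind common_target_config (exact arguments not visible in the trace)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o                           # NVMF_TRANSPORT_OPTS='-t tcp -o' from above
    $rpc bdev_null_create null0 100 4096                           # size/block size illustrative
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420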
00:23:23.966 06:53:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.966 06:53:37 -- common/autotest_common.sh@10 -- # set +x 00:23:23.966 [2024-12-14 06:53:37.903791] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:23.966 [2024-12-14 06:53:37.903890] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86870 ] 00:23:24.225 [2024-12-14 06:53:38.037720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.225 [2024-12-14 06:53:38.179082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.160 06:53:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:25.160 06:53:38 -- common/autotest_common.sh@862 -- # return 0 00:23:25.160 06:53:38 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:23:25.160 06:53:38 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:23:25.160 06:53:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:25.418 06:53:39 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:25.419 06:53:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:25.677 nvme0n1 00:23:25.677 06:53:39 -- host/digest.sh@91 -- # bperf_py perform_tests 00:23:25.677 06:53:39 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:25.677 Running I/O for 2 seconds... 
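Host side, each digest_clean run follows the fixed pattern just traced: start bdevperf against its own RPC socket with --wait-for-rpc, finish framework init over that socket, attach the NVMe-oF controller with data digest enabled, then drive I/O. Condensed, with paths and arguments taken from this first run:

    # one digest_clean run, host side (values from the randread 4096 / qd 128 run above)
    spdk=/home/vagrant/spdk_repo/spdk
    $spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # (the script waits for /var/tmp/bperf.sock to come up before issuing RPCs)
    $spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests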
00:23:28.210 00:23:28.210 Latency(us) 00:23:28.210 [2024-12-14T06:53:42.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.210 [2024-12-14T06:53:42.202Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:28.210 nvme0n1 : 2.00 19972.62 78.02 0.00 0.00 6403.42 2710.81 15073.28 00:23:28.210 [2024-12-14T06:53:42.202Z] =================================================================================================================== 00:23:28.210 [2024-12-14T06:53:42.202Z] Total : 19972.62 78.02 0.00 0.00 6403.42 2710.81 15073.28 00:23:28.210 0 00:23:28.210 06:53:41 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:23:28.210 06:53:41 -- host/digest.sh@92 -- # get_accel_stats 00:23:28.210 06:53:41 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:28.210 06:53:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:28.210 06:53:41 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:28.210 | select(.opcode=="crc32c") 00:23:28.210 | "\(.module_name) \(.executed)"' 00:23:28.210 06:53:41 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:23:28.210 06:53:41 -- host/digest.sh@93 -- # exp_module=software 00:23:28.210 06:53:41 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:23:28.210 06:53:41 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:28.210 06:53:41 -- host/digest.sh@97 -- # killprocess 86870 00:23:28.210 06:53:41 -- common/autotest_common.sh@936 -- # '[' -z 86870 ']' 00:23:28.210 06:53:41 -- common/autotest_common.sh@940 -- # kill -0 86870 00:23:28.210 06:53:41 -- common/autotest_common.sh@941 -- # uname 00:23:28.210 06:53:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:28.210 06:53:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86870 00:23:28.210 06:53:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:28.210 06:53:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:28.210 killing process with pid 86870 00:23:28.210 06:53:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86870' 00:23:28.210 Received shutdown signal, test time was about 2.000000 seconds 00:23:28.210 00:23:28.210 Latency(us) 00:23:28.210 [2024-12-14T06:53:42.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.210 [2024-12-14T06:53:42.202Z] =================================================================================================================== 00:23:28.210 [2024-12-14T06:53:42.202Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.210 06:53:41 -- common/autotest_common.sh@955 -- # kill 86870 00:23:28.210 06:53:41 -- common/autotest_common.sh@960 -- # wait 86870 00:23:28.469 06:53:42 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:23:28.469 06:53:42 -- host/digest.sh@77 -- # local rw bs qd 00:23:28.469 06:53:42 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:28.469 06:53:42 -- host/digest.sh@80 -- # rw=randread 00:23:28.469 06:53:42 -- host/digest.sh@80 -- # bs=131072 00:23:28.469 06:53:42 -- host/digest.sh@80 -- # qd=16 00:23:28.469 06:53:42 -- host/digest.sh@82 -- # bperfpid=86960 00:23:28.469 06:53:42 -- host/digest.sh@83 -- # waitforlisten 86960 /var/tmp/bperf.sock 00:23:28.469 06:53:42 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:28.469 06:53:42 -- 
common/autotest_common.sh@829 -- # '[' -z 86960 ']' 00:23:28.469 06:53:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:28.469 06:53:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:28.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:28.469 06:53:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:28.469 06:53:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:28.469 06:53:42 -- common/autotest_common.sh@10 -- # set +x 00:23:28.469 [2024-12-14 06:53:42.399983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:28.469 [2024-12-14 06:53:42.400104] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86960 ] 00:23:28.469 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:28.469 Zero copy mechanism will not be used. 00:23:28.739 [2024-12-14 06:53:42.539008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.739 [2024-12-14 06:53:42.651346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.693 06:53:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:29.693 06:53:43 -- common/autotest_common.sh@862 -- # return 0 00:23:29.693 06:53:43 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:23:29.693 06:53:43 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:23:29.693 06:53:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:29.951 06:53:43 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:29.951 06:53:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:30.210 nvme0n1 00:23:30.210 06:53:44 -- host/digest.sh@91 -- # bperf_py perform_tests 00:23:30.210 06:53:44 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:30.210 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:30.210 Zero copy mechanism will not be used. 00:23:30.210 Running I/O for 2 seconds... 
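After each run the script checks which accel module actually computed the crc32c digests; the check traced after the first run above (accel_get_stats plus the jq filter) is essentially the sketch below, and digest_clean expects the software module with a non-zero executed count.

    # digest verification step, condensed from get_accel_stats in the trace
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    read -r acc_module acc_executed < <(
        "$rpc" -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))                    # some crc32c work must have been accounted
    [[ $acc_module == software ]]             # digest_clean expects the software module here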
00:23:32.741 00:23:32.741 Latency(us) 00:23:32.741 [2024-12-14T06:53:46.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.741 [2024-12-14T06:53:46.733Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:32.741 nvme0n1 : 2.00 9812.03 1226.50 0.00 0.00 1627.86 547.37 6136.55 00:23:32.741 [2024-12-14T06:53:46.733Z] =================================================================================================================== 00:23:32.741 [2024-12-14T06:53:46.733Z] Total : 9812.03 1226.50 0.00 0.00 1627.86 547.37 6136.55 00:23:32.741 0 00:23:32.741 06:53:46 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:23:32.741 06:53:46 -- host/digest.sh@92 -- # get_accel_stats 00:23:32.741 06:53:46 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:32.741 06:53:46 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:32.741 06:53:46 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:32.741 | select(.opcode=="crc32c") 00:23:32.741 | "\(.module_name) \(.executed)"' 00:23:32.741 06:53:46 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:23:32.741 06:53:46 -- host/digest.sh@93 -- # exp_module=software 00:23:32.741 06:53:46 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:23:32.741 06:53:46 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:32.741 06:53:46 -- host/digest.sh@97 -- # killprocess 86960 00:23:32.741 06:53:46 -- common/autotest_common.sh@936 -- # '[' -z 86960 ']' 00:23:32.741 06:53:46 -- common/autotest_common.sh@940 -- # kill -0 86960 00:23:32.741 06:53:46 -- common/autotest_common.sh@941 -- # uname 00:23:32.741 06:53:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:32.741 06:53:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86960 00:23:32.741 06:53:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:32.741 06:53:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:32.741 killing process with pid 86960 00:23:32.741 06:53:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86960' 00:23:32.741 Received shutdown signal, test time was about 2.000000 seconds 00:23:32.741 00:23:32.741 Latency(us) 00:23:32.741 [2024-12-14T06:53:46.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.741 [2024-12-14T06:53:46.733Z] =================================================================================================================== 00:23:32.741 [2024-12-14T06:53:46.733Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:32.741 06:53:46 -- common/autotest_common.sh@955 -- # kill 86960 00:23:32.741 06:53:46 -- common/autotest_common.sh@960 -- # wait 86960 00:23:33.000 06:53:46 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:23:33.000 06:53:46 -- host/digest.sh@77 -- # local rw bs qd 00:23:33.000 06:53:46 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:33.000 06:53:46 -- host/digest.sh@80 -- # rw=randwrite 00:23:33.000 06:53:46 -- host/digest.sh@80 -- # bs=4096 00:23:33.000 06:53:46 -- host/digest.sh@80 -- # qd=128 00:23:33.000 06:53:46 -- host/digest.sh@82 -- # bperfpid=87056 00:23:33.000 06:53:46 -- host/digest.sh@83 -- # waitforlisten 87056 /var/tmp/bperf.sock 00:23:33.000 06:53:46 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:33.000 06:53:46 -- 
common/autotest_common.sh@829 -- # '[' -z 87056 ']' 00:23:33.000 06:53:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:33.000 06:53:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:33.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:33.000 06:53:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:33.000 06:53:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:33.000 06:53:46 -- common/autotest_common.sh@10 -- # set +x 00:23:33.000 [2024-12-14 06:53:46.925923] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:33.000 [2024-12-14 06:53:46.926064] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87056 ] 00:23:33.259 [2024-12-14 06:53:47.060117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.259 [2024-12-14 06:53:47.183498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.195 06:53:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:34.195 06:53:47 -- common/autotest_common.sh@862 -- # return 0 00:23:34.195 06:53:47 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:23:34.195 06:53:47 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:23:34.195 06:53:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:34.195 06:53:48 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:34.195 06:53:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:34.763 nvme0n1 00:23:34.763 06:53:48 -- host/digest.sh@91 -- # bperf_py perform_tests 00:23:34.763 06:53:48 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:34.763 Running I/O for 2 seconds... 
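For orientation, digest_clean cycles through four bdevperf workloads, visible as host/digest.sh@122-125 in this trace; the run in progress here is the third.

    run_bperf randread  4096   128    # 4 KiB reads,    queue depth 128
    run_bperf randread  131072 16     # 128 KiB reads,  queue depth 16
    run_bperf randwrite 4096   128    # 4 KiB writes,   queue depth 128
    run_bperf randwrite 131072 16     # 128 KiB writes, queue depth 16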
00:23:36.665 00:23:36.665 Latency(us) 00:23:36.665 [2024-12-14T06:53:50.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.665 [2024-12-14T06:53:50.657Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:36.665 nvme0n1 : 2.00 27461.11 107.27 0.00 0.00 4656.34 1921.40 11498.59 00:23:36.665 [2024-12-14T06:53:50.657Z] =================================================================================================================== 00:23:36.665 [2024-12-14T06:53:50.657Z] Total : 27461.11 107.27 0.00 0.00 4656.34 1921.40 11498.59 00:23:36.665 0 00:23:36.665 06:53:50 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:23:36.924 06:53:50 -- host/digest.sh@92 -- # get_accel_stats 00:23:36.924 06:53:50 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:36.924 06:53:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:36.924 06:53:50 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:36.924 | select(.opcode=="crc32c") 00:23:36.924 | "\(.module_name) \(.executed)"' 00:23:37.182 06:53:50 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:23:37.182 06:53:50 -- host/digest.sh@93 -- # exp_module=software 00:23:37.182 06:53:50 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:23:37.182 06:53:50 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:37.182 06:53:50 -- host/digest.sh@97 -- # killprocess 87056 00:23:37.183 06:53:50 -- common/autotest_common.sh@936 -- # '[' -z 87056 ']' 00:23:37.183 06:53:50 -- common/autotest_common.sh@940 -- # kill -0 87056 00:23:37.183 06:53:50 -- common/autotest_common.sh@941 -- # uname 00:23:37.183 06:53:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:37.183 06:53:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87056 00:23:37.183 06:53:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:37.183 06:53:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:37.183 killing process with pid 87056 00:23:37.183 06:53:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87056' 00:23:37.183 Received shutdown signal, test time was about 2.000000 seconds 00:23:37.183 00:23:37.183 Latency(us) 00:23:37.183 [2024-12-14T06:53:51.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.183 [2024-12-14T06:53:51.175Z] =================================================================================================================== 00:23:37.183 [2024-12-14T06:53:51.175Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:37.183 06:53:50 -- common/autotest_common.sh@955 -- # kill 87056 00:23:37.183 06:53:50 -- common/autotest_common.sh@960 -- # wait 87056 00:23:37.442 06:53:51 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:23:37.442 06:53:51 -- host/digest.sh@77 -- # local rw bs qd 00:23:37.442 06:53:51 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:37.442 06:53:51 -- host/digest.sh@80 -- # rw=randwrite 00:23:37.442 06:53:51 -- host/digest.sh@80 -- # bs=131072 00:23:37.442 06:53:51 -- host/digest.sh@80 -- # qd=16 00:23:37.442 06:53:51 -- host/digest.sh@82 -- # bperfpid=87142 00:23:37.442 06:53:51 -- host/digest.sh@83 -- # waitforlisten 87142 /var/tmp/bperf.sock 00:23:37.442 06:53:51 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:37.442 06:53:51 -- 
common/autotest_common.sh@829 -- # '[' -z 87142 ']' 00:23:37.442 06:53:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:37.442 06:53:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:37.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:37.442 06:53:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:37.442 06:53:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:37.442 06:53:51 -- common/autotest_common.sh@10 -- # set +x 00:23:37.442 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:37.442 Zero copy mechanism will not be used. 00:23:37.442 [2024-12-14 06:53:51.345498] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:37.442 [2024-12-14 06:53:51.345613] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87142 ] 00:23:37.701 [2024-12-14 06:53:51.479995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.701 [2024-12-14 06:53:51.591808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.637 06:53:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:38.637 06:53:52 -- common/autotest_common.sh@862 -- # return 0 00:23:38.637 06:53:52 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:23:38.637 06:53:52 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:23:38.637 06:53:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:38.896 06:53:52 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:38.896 06:53:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:39.155 nvme0n1 00:23:39.155 06:53:52 -- host/digest.sh@91 -- # bperf_py perform_tests 00:23:39.155 06:53:52 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:39.155 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:39.155 Zero copy mechanism will not be used. 00:23:39.155 Running I/O for 2 seconds... 
00:23:41.688 00:23:41.688 Latency(us) 00:23:41.688 [2024-12-14T06:53:55.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.688 [2024-12-14T06:53:55.680Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:41.688 nvme0n1 : 2.00 8696.76 1087.10 0.00 0.00 1835.35 1526.69 9592.09 00:23:41.688 [2024-12-14T06:53:55.680Z] =================================================================================================================== 00:23:41.688 [2024-12-14T06:53:55.680Z] Total : 8696.76 1087.10 0.00 0.00 1835.35 1526.69 9592.09 00:23:41.688 0 00:23:41.688 06:53:55 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:23:41.688 06:53:55 -- host/digest.sh@92 -- # get_accel_stats 00:23:41.688 06:53:55 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:41.688 06:53:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:41.688 06:53:55 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:41.688 | select(.opcode=="crc32c") 00:23:41.688 | "\(.module_name) \(.executed)"' 00:23:41.688 06:53:55 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:23:41.688 06:53:55 -- host/digest.sh@93 -- # exp_module=software 00:23:41.688 06:53:55 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:23:41.688 06:53:55 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:41.688 06:53:55 -- host/digest.sh@97 -- # killprocess 87142 00:23:41.688 06:53:55 -- common/autotest_common.sh@936 -- # '[' -z 87142 ']' 00:23:41.688 06:53:55 -- common/autotest_common.sh@940 -- # kill -0 87142 00:23:41.688 06:53:55 -- common/autotest_common.sh@941 -- # uname 00:23:41.688 06:53:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:41.688 06:53:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87142 00:23:41.688 06:53:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:41.688 06:53:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:41.688 06:53:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87142' 00:23:41.688 killing process with pid 87142 00:23:41.688 Received shutdown signal, test time was about 2.000000 seconds 00:23:41.688 00:23:41.688 Latency(us) 00:23:41.688 [2024-12-14T06:53:55.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.688 [2024-12-14T06:53:55.680Z] =================================================================================================================== 00:23:41.688 [2024-12-14T06:53:55.680Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:41.688 06:53:55 -- common/autotest_common.sh@955 -- # kill 87142 00:23:41.688 06:53:55 -- common/autotest_common.sh@960 -- # wait 87142 00:23:41.947 06:53:55 -- host/digest.sh@126 -- # killprocess 86820 00:23:41.947 06:53:55 -- common/autotest_common.sh@936 -- # '[' -z 86820 ']' 00:23:41.947 06:53:55 -- common/autotest_common.sh@940 -- # kill -0 86820 00:23:41.947 06:53:55 -- common/autotest_common.sh@941 -- # uname 00:23:41.947 06:53:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:41.947 06:53:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86820 00:23:41.947 06:53:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:41.947 06:53:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:41.947 killing process with pid 86820 00:23:41.947 06:53:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86820' 
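The killprocess helper traced here (and after every bdevperf run above) reduces to roughly the sketch below; the sudo branch and the behaviour when the pid has already exited are assumptions.

    # reconstruction of the traced killprocess checks (simplified)
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1                        # the '[' -z <pid> ']' guard
        kill -0 "$pid" 2>/dev/null || return 0           # assumed: nothing to do if it already exited
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name == sudo ]] && :                 # assumed: the real helper treats sudo-wrapped pids specially
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }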
00:23:41.947 06:53:55 -- common/autotest_common.sh@955 -- # kill 86820 00:23:41.947 06:53:55 -- common/autotest_common.sh@960 -- # wait 86820 00:23:42.219 00:23:42.219 real 0m19.562s 00:23:42.219 user 0m36.919s 00:23:42.219 sys 0m4.978s 00:23:42.219 06:53:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:42.219 06:53:56 -- common/autotest_common.sh@10 -- # set +x 00:23:42.219 ************************************ 00:23:42.219 END TEST nvmf_digest_clean 00:23:42.219 ************************************ 00:23:42.219 06:53:56 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:23:42.219 06:53:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:42.219 06:53:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:42.219 06:53:56 -- common/autotest_common.sh@10 -- # set +x 00:23:42.219 ************************************ 00:23:42.219 START TEST nvmf_digest_error 00:23:42.219 ************************************ 00:23:42.219 06:53:56 -- common/autotest_common.sh@1114 -- # run_digest_error 00:23:42.219 06:53:56 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:23:42.219 06:53:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:42.219 06:53:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:42.219 06:53:56 -- common/autotest_common.sh@10 -- # set +x 00:23:42.219 06:53:56 -- nvmf/common.sh@469 -- # nvmfpid=87260 00:23:42.219 06:53:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:42.219 06:53:56 -- nvmf/common.sh@470 -- # waitforlisten 87260 00:23:42.219 06:53:56 -- common/autotest_common.sh@829 -- # '[' -z 87260 ']' 00:23:42.219 06:53:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.219 06:53:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:42.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.219 06:53:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.219 06:53:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:42.219 06:53:56 -- common/autotest_common.sh@10 -- # set +x 00:23:42.508 [2024-12-14 06:53:56.223641] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:42.508 [2024-12-14 06:53:56.223739] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.508 [2024-12-14 06:53:56.355715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.508 [2024-12-14 06:53:56.489383] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:42.508 [2024-12-14 06:53:56.489531] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.508 [2024-12-14 06:53:56.489543] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.508 [2024-12-14 06:53:56.489552] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:42.508 [2024-12-14 06:53:56.489576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.443 06:53:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:43.443 06:53:57 -- common/autotest_common.sh@862 -- # return 0 00:23:43.444 06:53:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:43.444 06:53:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:43.444 06:53:57 -- common/autotest_common.sh@10 -- # set +x 00:23:43.444 06:53:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.444 06:53:57 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:23:43.444 06:53:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.444 06:53:57 -- common/autotest_common.sh@10 -- # set +x 00:23:43.444 [2024-12-14 06:53:57.282234] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:23:43.444 06:53:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.444 06:53:57 -- host/digest.sh@104 -- # common_target_config 00:23:43.444 06:53:57 -- host/digest.sh@43 -- # rpc_cmd 00:23:43.444 06:53:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.444 06:53:57 -- common/autotest_common.sh@10 -- # set +x 00:23:43.444 null0 00:23:43.444 [2024-12-14 06:53:57.427164] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.703 [2024-12-14 06:53:57.451349] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.703 06:53:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.703 06:53:57 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:23:43.703 06:53:57 -- host/digest.sh@54 -- # local rw bs qd 00:23:43.703 06:53:57 -- host/digest.sh@56 -- # rw=randread 00:23:43.703 06:53:57 -- host/digest.sh@56 -- # bs=4096 00:23:43.703 06:53:57 -- host/digest.sh@56 -- # qd=128 00:23:43.703 06:53:57 -- host/digest.sh@58 -- # bperfpid=87306 00:23:43.703 06:53:57 -- host/digest.sh@60 -- # waitforlisten 87306 /var/tmp/bperf.sock 00:23:43.703 06:53:57 -- common/autotest_common.sh@829 -- # '[' -z 87306 ']' 00:23:43.703 06:53:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:43.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:43.703 06:53:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:43.703 06:53:57 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:23:43.703 06:53:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:43.703 06:53:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:43.703 06:53:57 -- common/autotest_common.sh@10 -- # set +x 00:23:43.703 [2024-12-14 06:53:57.509025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:43.703 [2024-12-14 06:53:57.509156] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87306 ] 00:23:43.703 [2024-12-14 06:53:57.645995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.961 [2024-12-14 06:53:57.796452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.529 06:53:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:44.529 06:53:58 -- common/autotest_common.sh@862 -- # return 0 00:23:44.529 06:53:58 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:44.529 06:53:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:44.787 06:53:58 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:44.787 06:53:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.787 06:53:58 -- common/autotest_common.sh@10 -- # set +x 00:23:44.787 06:53:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.787 06:53:58 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:44.787 06:53:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:45.045 nvme0n1 00:23:45.045 06:53:59 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:45.045 06:53:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.045 06:53:59 -- common/autotest_common.sh@10 -- # set +x 00:23:45.045 06:53:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.045 06:53:59 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:45.045 06:53:59 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:45.304 Running I/O for 2 seconds... 
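The digest_error variant just set up differs from digest_clean in two ways visible in the trace: the target routes the crc32c opcode through the error accel module, and corruption injection is toggled around the controller attach. Condensed, with commands and sockets as traced (target RPCs go to the default spdk.sock, host RPCs to /var/tmp/bperf.sock):

    # digest_error flow, condensed from the trace above
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc accel_assign_opc -o crc32c -m error                        # target: crc32c handled by the error module
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $rpc accel_error_inject_error -o crc32c -t disable              # target: no corruption while attaching
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256       # target: start corrupting (-i 256 as traced)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    # expected outcome: "data digest error" on the host and the transient transport completions logged below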
00:23:45.304 [2024-12-14 06:53:59.163306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.304 [2024-12-14 06:53:59.163393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.304 [2024-12-14 06:53:59.163408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.304 [2024-12-14 06:53:59.177307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.304 [2024-12-14 06:53:59.177383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.304 [2024-12-14 06:53:59.177396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.304 [2024-12-14 06:53:59.188722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.304 [2024-12-14 06:53:59.188781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.304 [2024-12-14 06:53:59.188810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.304 [2024-12-14 06:53:59.199369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.304 [2024-12-14 06:53:59.199428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.304 [2024-12-14 06:53:59.199457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.304 [2024-12-14 06:53:59.210617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.304 [2024-12-14 06:53:59.210674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.304 [2024-12-14 06:53:59.210702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.304 [2024-12-14 06:53:59.220996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.304 [2024-12-14 06:53:59.221050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.304 [2024-12-14 06:53:59.221078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.304 [2024-12-14 06:53:59.231340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.304 [2024-12-14 06:53:59.231396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.304 [2024-12-14 06:53:59.231425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.304 [2024-12-14 06:53:59.240461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.304 [2024-12-14 06:53:59.240517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.304 [2024-12-14 06:53:59.240545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.304 [2024-12-14 06:53:59.251371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.304 [2024-12-14 06:53:59.251427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.304 [2024-12-14 06:53:59.251461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.304 [2024-12-14 06:53:59.261097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.304 [2024-12-14 06:53:59.261152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.304 [2024-12-14 06:53:59.261180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.304 [2024-12-14 06:53:59.272790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.304 [2024-12-14 06:53:59.272847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.304 [2024-12-14 06:53:59.272875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.304 [2024-12-14 06:53:59.282072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.304 [2024-12-14 06:53:59.282126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.304 [2024-12-14 06:53:59.282189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.304 [2024-12-14 06:53:59.293642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.304 [2024-12-14 06:53:59.293699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.304 [2024-12-14 06:53:59.293727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.563 [2024-12-14 06:53:59.308244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.563 [2024-12-14 06:53:59.308319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.563 [2024-12-14 06:53:59.308348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.563 [2024-12-14 06:53:59.320833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.563 [2024-12-14 06:53:59.320896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.563 [2024-12-14 06:53:59.320924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.563 [2024-12-14 06:53:59.335036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.563 [2024-12-14 06:53:59.335094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.563 [2024-12-14 06:53:59.335123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.563 [2024-12-14 06:53:59.343770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.563 [2024-12-14 06:53:59.343827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.563 [2024-12-14 06:53:59.343855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.563 [2024-12-14 06:53:59.357146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.563 [2024-12-14 06:53:59.357203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.563 [2024-12-14 06:53:59.357231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.564 [2024-12-14 06:53:59.365744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.564 [2024-12-14 06:53:59.365820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.564 [2024-12-14 06:53:59.365848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.564 [2024-12-14 06:53:59.375989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.564 [2024-12-14 06:53:59.376043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.564 [2024-12-14 06:53:59.376071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.564 [2024-12-14 06:53:59.386234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.564 [2024-12-14 06:53:59.386293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:45.564 [2024-12-14 06:53:59.386322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.564 [2024-12-14 06:53:59.398918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.564 [2024-12-14 06:53:59.399016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.564 [2024-12-14 06:53:59.399046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.564 [2024-12-14 06:53:59.410102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.564 [2024-12-14 06:53:59.410192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.564 [2024-12-14 06:53:59.410221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.564 [2024-12-14 06:53:59.421062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.564 [2024-12-14 06:53:59.421135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.564 [2024-12-14 06:53:59.421164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.564 [2024-12-14 06:53:59.432602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.564 [2024-12-14 06:53:59.432659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.564 [2024-12-14 06:53:59.432686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.564 [2024-12-14 06:53:59.442968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.564 [2024-12-14 06:53:59.443036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.564 [2024-12-14 06:53:59.443065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.564 [2024-12-14 06:53:59.455898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.564 [2024-12-14 06:53:59.455998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.564 [2024-12-14 06:53:59.456013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.564 [2024-12-14 06:53:59.467489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.564 [2024-12-14 06:53:59.467546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 
lba:152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.564 [2024-12-14 06:53:59.467574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.564 [2024-12-14 06:53:59.478425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.564 [2024-12-14 06:53:59.478488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.564 [2024-12-14 06:53:59.478517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.564 [2024-12-14 06:53:59.488540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.564 [2024-12-14 06:53:59.488597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.564 [2024-12-14 06:53:59.488625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.564 [2024-12-14 06:53:59.502444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.564 [2024-12-14 06:53:59.502517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.564 [2024-12-14 06:53:59.502546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.564 [2024-12-14 06:53:59.512315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.564 [2024-12-14 06:53:59.512383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.564 [2024-12-14 06:53:59.512411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.564 [2024-12-14 06:53:59.524253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.564 [2024-12-14 06:53:59.524310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.564 [2024-12-14 06:53:59.524338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.564 [2024-12-14 06:53:59.537045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.564 [2024-12-14 06:53:59.537101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.564 [2024-12-14 06:53:59.537130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.564 [2024-12-14 06:53:59.546980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.564 [2024-12-14 06:53:59.547047] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.564 [2024-12-14 06:53:59.547076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.823 [2024-12-14 06:53:59.560205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.823 [2024-12-14 06:53:59.560262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.823 [2024-12-14 06:53:59.560291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.823 [2024-12-14 06:53:59.574201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.823 [2024-12-14 06:53:59.574259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.823 [2024-12-14 06:53:59.574287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.823 [2024-12-14 06:53:59.587122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.823 [2024-12-14 06:53:59.587178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.823 [2024-12-14 06:53:59.587206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.823 [2024-12-14 06:53:59.599385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.823 [2024-12-14 06:53:59.599440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.823 [2024-12-14 06:53:59.599467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.823 [2024-12-14 06:53:59.611825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.823 [2024-12-14 06:53:59.611881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.823 [2024-12-14 06:53:59.611909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.823 [2024-12-14 06:53:59.621121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.823 [2024-12-14 06:53:59.621176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.823 [2024-12-14 06:53:59.621203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.823 [2024-12-14 06:53:59.634169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 
00:23:45.823 [2024-12-14 06:53:59.634225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.823 [2024-12-14 06:53:59.634253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.823 [2024-12-14 06:53:59.647666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.823 [2024-12-14 06:53:59.647723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.823 [2024-12-14 06:53:59.647751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.823 [2024-12-14 06:53:59.657989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.823 [2024-12-14 06:53:59.658054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.824 [2024-12-14 06:53:59.658082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.824 [2024-12-14 06:53:59.668987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.824 [2024-12-14 06:53:59.669053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.824 [2024-12-14 06:53:59.669081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.824 [2024-12-14 06:53:59.680657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.824 [2024-12-14 06:53:59.680719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.824 [2024-12-14 06:53:59.680747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.824 [2024-12-14 06:53:59.691834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.824 [2024-12-14 06:53:59.691896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.824 [2024-12-14 06:53:59.691923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.824 [2024-12-14 06:53:59.705382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.824 [2024-12-14 06:53:59.705439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.824 [2024-12-14 06:53:59.705467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.824 [2024-12-14 06:53:59.718408] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.824 [2024-12-14 06:53:59.718485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.824 [2024-12-14 06:53:59.718513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.824 [2024-12-14 06:53:59.731007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.824 [2024-12-14 06:53:59.731092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.824 [2024-12-14 06:53:59.731104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.824 [2024-12-14 06:53:59.744587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.824 [2024-12-14 06:53:59.744643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.824 [2024-12-14 06:53:59.744671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.824 [2024-12-14 06:53:59.753761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.824 [2024-12-14 06:53:59.753817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.824 [2024-12-14 06:53:59.753845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.824 [2024-12-14 06:53:59.766358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.824 [2024-12-14 06:53:59.766416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.824 [2024-12-14 06:53:59.766444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.824 [2024-12-14 06:53:59.779147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.824 [2024-12-14 06:53:59.779205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.824 [2024-12-14 06:53:59.779233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.824 [2024-12-14 06:53:59.792010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.824 [2024-12-14 06:53:59.792065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.824 [2024-12-14 06:53:59.792093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:23:45.824 [2024-12-14 06:53:59.806367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:45.824 [2024-12-14 06:53:59.806426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.824 [2024-12-14 06:53:59.806456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.083 [2024-12-14 06:53:59.820354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.083 [2024-12-14 06:53:59.820412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.083 [2024-12-14 06:53:59.820441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.083 [2024-12-14 06:53:59.834058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.083 [2024-12-14 06:53:59.834125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.083 [2024-12-14 06:53:59.834186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.083 [2024-12-14 06:53:59.846909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.083 [2024-12-14 06:53:59.846990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.083 [2024-12-14 06:53:59.847021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.083 [2024-12-14 06:53:59.861162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.083 [2024-12-14 06:53:59.861218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.083 [2024-12-14 06:53:59.861246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.083 [2024-12-14 06:53:59.874630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.083 [2024-12-14 06:53:59.874690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.083 [2024-12-14 06:53:59.874718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.083 [2024-12-14 06:53:59.888261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.083 [2024-12-14 06:53:59.888319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.083 [2024-12-14 06:53:59.888347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.083 [2024-12-14 06:53:59.897666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.083 [2024-12-14 06:53:59.897724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.083 [2024-12-14 06:53:59.897752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.083 [2024-12-14 06:53:59.911676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.083 [2024-12-14 06:53:59.911733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.083 [2024-12-14 06:53:59.911762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.083 [2024-12-14 06:53:59.928444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.083 [2024-12-14 06:53:59.928515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.083 [2024-12-14 06:53:59.928543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.083 [2024-12-14 06:53:59.944492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.083 [2024-12-14 06:53:59.944549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.083 [2024-12-14 06:53:59.944577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.083 [2024-12-14 06:53:59.957502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.083 [2024-12-14 06:53:59.957575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.083 [2024-12-14 06:53:59.957603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.083 [2024-12-14 06:53:59.969446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.083 [2024-12-14 06:53:59.969512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.083 [2024-12-14 06:53:59.969540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.083 [2024-12-14 06:53:59.980677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.083 [2024-12-14 06:53:59.980719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.083 [2024-12-14 06:53:59.980747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.083 [2024-12-14 06:53:59.991449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.083 [2024-12-14 06:53:59.991506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.083 [2024-12-14 06:53:59.991534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.083 [2024-12-14 06:54:00.001246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.083 [2024-12-14 06:54:00.001319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.083 [2024-12-14 06:54:00.001348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.083 [2024-12-14 06:54:00.014669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.083 [2024-12-14 06:54:00.014740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.083 [2024-12-14 06:54:00.014769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.083 [2024-12-14 06:54:00.024831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.083 [2024-12-14 06:54:00.024902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.083 [2024-12-14 06:54:00.024917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.083 [2024-12-14 06:54:00.037007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.083 [2024-12-14 06:54:00.037065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.083 [2024-12-14 06:54:00.037094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.083 [2024-12-14 06:54:00.050545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.083 [2024-12-14 06:54:00.050604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.084 [2024-12-14 06:54:00.050633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.084 [2024-12-14 06:54:00.065267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.084 [2024-12-14 06:54:00.065354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:46.084 [2024-12-14 06:54:00.065397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.343 [2024-12-14 06:54:00.079222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.343 [2024-12-14 06:54:00.079278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.343 [2024-12-14 06:54:00.079305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.343 [2024-12-14 06:54:00.093064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.343 [2024-12-14 06:54:00.093122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.343 [2024-12-14 06:54:00.093149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.343 [2024-12-14 06:54:00.106381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.343 [2024-12-14 06:54:00.106438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.343 [2024-12-14 06:54:00.106451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.343 [2024-12-14 06:54:00.122038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.343 [2024-12-14 06:54:00.122091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.343 [2024-12-14 06:54:00.122121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.343 [2024-12-14 06:54:00.137503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.343 [2024-12-14 06:54:00.137560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.343 [2024-12-14 06:54:00.137588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.343 [2024-12-14 06:54:00.148246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.343 [2024-12-14 06:54:00.148303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.343 [2024-12-14 06:54:00.148346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.343 [2024-12-14 06:54:00.161294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.343 [2024-12-14 06:54:00.161350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 
lba:10831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.343 [2024-12-14 06:54:00.161379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.343 [2024-12-14 06:54:00.175077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.343 [2024-12-14 06:54:00.175132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.343 [2024-12-14 06:54:00.175161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.343 [2024-12-14 06:54:00.188701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.343 [2024-12-14 06:54:00.188760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.343 [2024-12-14 06:54:00.188789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.343 [2024-12-14 06:54:00.202817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.343 [2024-12-14 06:54:00.202876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.343 [2024-12-14 06:54:00.202897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.343 [2024-12-14 06:54:00.218043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.343 [2024-12-14 06:54:00.218100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.343 [2024-12-14 06:54:00.218128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.343 [2024-12-14 06:54:00.231590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.343 [2024-12-14 06:54:00.231647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.343 [2024-12-14 06:54:00.231675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.343 [2024-12-14 06:54:00.243070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.343 [2024-12-14 06:54:00.243158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.343 [2024-12-14 06:54:00.243186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.343 [2024-12-14 06:54:00.254161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.343 [2024-12-14 06:54:00.254204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.343 [2024-12-14 06:54:00.254217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.343 [2024-12-14 06:54:00.268847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.343 [2024-12-14 06:54:00.268889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.343 [2024-12-14 06:54:00.268902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.343 [2024-12-14 06:54:00.282687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.343 [2024-12-14 06:54:00.282744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.343 [2024-12-14 06:54:00.282773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.343 [2024-12-14 06:54:00.296524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.343 [2024-12-14 06:54:00.296581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.343 [2024-12-14 06:54:00.296610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.343 [2024-12-14 06:54:00.310868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.343 [2024-12-14 06:54:00.310925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.343 [2024-12-14 06:54:00.310953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.343 [2024-12-14 06:54:00.324747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.343 [2024-12-14 06:54:00.324786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.343 [2024-12-14 06:54:00.324816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.602 [2024-12-14 06:54:00.338772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.602 [2024-12-14 06:54:00.338829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.602 [2024-12-14 06:54:00.338858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.602 [2024-12-14 06:54:00.352265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 
00:23:46.603 [2024-12-14 06:54:00.352337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.603 [2024-12-14 06:54:00.352366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.603 [2024-12-14 06:54:00.360988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.603 [2024-12-14 06:54:00.361055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.603 [2024-12-14 06:54:00.361083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.603 [2024-12-14 06:54:00.373422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.603 [2024-12-14 06:54:00.373478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.603 [2024-12-14 06:54:00.373507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.603 [2024-12-14 06:54:00.387359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.603 [2024-12-14 06:54:00.387428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.603 [2024-12-14 06:54:00.387457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.603 [2024-12-14 06:54:00.401412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.603 [2024-12-14 06:54:00.401470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.603 [2024-12-14 06:54:00.401498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.603 [2024-12-14 06:54:00.416309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.603 [2024-12-14 06:54:00.416383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.603 [2024-12-14 06:54:00.416411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.603 [2024-12-14 06:54:00.428786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.603 [2024-12-14 06:54:00.428842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.603 [2024-12-14 06:54:00.428870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.603 [2024-12-14 06:54:00.439512] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.603 [2024-12-14 06:54:00.439567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.603 [2024-12-14 06:54:00.439595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.603 [2024-12-14 06:54:00.449220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.603 [2024-12-14 06:54:00.449276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.603 [2024-12-14 06:54:00.449304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.603 [2024-12-14 06:54:00.460038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.603 [2024-12-14 06:54:00.460119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.603 [2024-12-14 06:54:00.460146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.603 [2024-12-14 06:54:00.469166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.603 [2024-12-14 06:54:00.469205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.603 [2024-12-14 06:54:00.469233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.603 [2024-12-14 06:54:00.481231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.603 [2024-12-14 06:54:00.481288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.603 [2024-12-14 06:54:00.481316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.603 [2024-12-14 06:54:00.493263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.603 [2024-12-14 06:54:00.493353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.603 [2024-12-14 06:54:00.493381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.603 [2024-12-14 06:54:00.504291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.603 [2024-12-14 06:54:00.504348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.603 [2024-12-14 06:54:00.504376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:46.603 [2024-12-14 06:54:00.515618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.603 [2024-12-14 06:54:00.515675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.603 [2024-12-14 06:54:00.515703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.603 [2024-12-14 06:54:00.526422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.603 [2024-12-14 06:54:00.526494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.603 [2024-12-14 06:54:00.526521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.603 [2024-12-14 06:54:00.537357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.603 [2024-12-14 06:54:00.537412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.603 [2024-12-14 06:54:00.537440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.603 [2024-12-14 06:54:00.550582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.603 [2024-12-14 06:54:00.550638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.603 [2024-12-14 06:54:00.550665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.603 [2024-12-14 06:54:00.559892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.603 [2024-12-14 06:54:00.559974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.603 [2024-12-14 06:54:00.559987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.603 [2024-12-14 06:54:00.572307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.603 [2024-12-14 06:54:00.572364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.603 [2024-12-14 06:54:00.572377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.603 [2024-12-14 06:54:00.582610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.603 [2024-12-14 06:54:00.582667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.603 [2024-12-14 06:54:00.582680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.863 [2024-12-14 06:54:00.593364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.863 [2024-12-14 06:54:00.593427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.863 [2024-12-14 06:54:00.593455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.863 [2024-12-14 06:54:00.605477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.863 [2024-12-14 06:54:00.605533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.863 [2024-12-14 06:54:00.605561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.863 [2024-12-14 06:54:00.618486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.863 [2024-12-14 06:54:00.618542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.863 [2024-12-14 06:54:00.618569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.863 [2024-12-14 06:54:00.631978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.863 [2024-12-14 06:54:00.632049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.863 [2024-12-14 06:54:00.632078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.863 [2024-12-14 06:54:00.645997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.863 [2024-12-14 06:54:00.646054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.863 [2024-12-14 06:54:00.646067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.863 [2024-12-14 06:54:00.658703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.863 [2024-12-14 06:54:00.658762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.863 [2024-12-14 06:54:00.658791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.863 [2024-12-14 06:54:00.668983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.863 [2024-12-14 06:54:00.669052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.863 [2024-12-14 06:54:00.669081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.863 [2024-12-14 06:54:00.682294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.863 [2024-12-14 06:54:00.682351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.863 [2024-12-14 06:54:00.682381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.863 [2024-12-14 06:54:00.695573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.863 [2024-12-14 06:54:00.695629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.863 [2024-12-14 06:54:00.695658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.863 [2024-12-14 06:54:00.708576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.863 [2024-12-14 06:54:00.708632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.863 [2024-12-14 06:54:00.708660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.863 [2024-12-14 06:54:00.721673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.863 [2024-12-14 06:54:00.721730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.863 [2024-12-14 06:54:00.721758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.863 [2024-12-14 06:54:00.734084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.863 [2024-12-14 06:54:00.734161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.863 [2024-12-14 06:54:00.734174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.863 [2024-12-14 06:54:00.747932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.863 [2024-12-14 06:54:00.748015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.863 [2024-12-14 06:54:00.748045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.863 [2024-12-14 06:54:00.760332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.863 [2024-12-14 06:54:00.760401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:46.863 [2024-12-14 06:54:00.760429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.863 [2024-12-14 06:54:00.772668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.863 [2024-12-14 06:54:00.772725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.863 [2024-12-14 06:54:00.772753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.863 [2024-12-14 06:54:00.785760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.863 [2024-12-14 06:54:00.785816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.863 [2024-12-14 06:54:00.785844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.863 [2024-12-14 06:54:00.798864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.863 [2024-12-14 06:54:00.798922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.863 [2024-12-14 06:54:00.798950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.863 [2024-12-14 06:54:00.812077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.863 [2024-12-14 06:54:00.812134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.864 [2024-12-14 06:54:00.812163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.864 [2024-12-14 06:54:00.825098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.864 [2024-12-14 06:54:00.825162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.864 [2024-12-14 06:54:00.825191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.864 [2024-12-14 06:54:00.838379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.864 [2024-12-14 06:54:00.838438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.864 [2024-12-14 06:54:00.838482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.864 [2024-12-14 06:54:00.850666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:46.864 [2024-12-14 06:54:00.850721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3596 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.864 [2024-12-14 06:54:00.850748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.123 [2024-12-14 06:54:00.863131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.123 [2024-12-14 06:54:00.863187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.123 [2024-12-14 06:54:00.863217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.123 [2024-12-14 06:54:00.876263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.123 [2024-12-14 06:54:00.876318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.123 [2024-12-14 06:54:00.876346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.123 [2024-12-14 06:54:00.890254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.123 [2024-12-14 06:54:00.890312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.123 [2024-12-14 06:54:00.890341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.123 [2024-12-14 06:54:00.904884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.123 [2024-12-14 06:54:00.904967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.123 [2024-12-14 06:54:00.904997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.123 [2024-12-14 06:54:00.917966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.123 [2024-12-14 06:54:00.918046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.123 [2024-12-14 06:54:00.918075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.123 [2024-12-14 06:54:00.930193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.123 [2024-12-14 06:54:00.930250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.123 [2024-12-14 06:54:00.930280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.123 [2024-12-14 06:54:00.941405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.123 [2024-12-14 06:54:00.941476] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.123 [2024-12-14 06:54:00.941504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.123 [2024-12-14 06:54:00.953629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.123 [2024-12-14 06:54:00.953684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.123 [2024-12-14 06:54:00.953726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.123 [2024-12-14 06:54:00.966344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.123 [2024-12-14 06:54:00.966385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.123 [2024-12-14 06:54:00.966429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.123 [2024-12-14 06:54:00.977173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.123 [2024-12-14 06:54:00.977232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.123 [2024-12-14 06:54:00.977260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.123 [2024-12-14 06:54:00.989323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.123 [2024-12-14 06:54:00.989400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.123 [2024-12-14 06:54:00.989428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.123 [2024-12-14 06:54:01.001767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.123 [2024-12-14 06:54:01.001826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.123 [2024-12-14 06:54:01.001854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.123 [2024-12-14 06:54:01.013002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.123 [2024-12-14 06:54:01.013058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.123 [2024-12-14 06:54:01.013086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.123 [2024-12-14 06:54:01.023193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 
00:23:47.123 [2024-12-14 06:54:01.023251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.124 [2024-12-14 06:54:01.023280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.124 [2024-12-14 06:54:01.033368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.124 [2024-12-14 06:54:01.033425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.124 [2024-12-14 06:54:01.033453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.124 [2024-12-14 06:54:01.043582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.124 [2024-12-14 06:54:01.043639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.124 [2024-12-14 06:54:01.043666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.124 [2024-12-14 06:54:01.053422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.124 [2024-12-14 06:54:01.053477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.124 [2024-12-14 06:54:01.053505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.124 [2024-12-14 06:54:01.064918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.124 [2024-12-14 06:54:01.064981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.124 [2024-12-14 06:54:01.065011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.124 [2024-12-14 06:54:01.074676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.124 [2024-12-14 06:54:01.074732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.124 [2024-12-14 06:54:01.074759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.124 [2024-12-14 06:54:01.087376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.124 [2024-12-14 06:54:01.087433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.124 [2024-12-14 06:54:01.087460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.124 [2024-12-14 06:54:01.098916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.124 [2024-12-14 06:54:01.098984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.124 [2024-12-14 06:54:01.099013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.124 [2024-12-14 06:54:01.111398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.124 [2024-12-14 06:54:01.111453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.124 [2024-12-14 06:54:01.111481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.383 [2024-12-14 06:54:01.125170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.383 [2024-12-14 06:54:01.125228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.383 [2024-12-14 06:54:01.125256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.383 [2024-12-14 06:54:01.138410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178cf50) 00:23:47.383 [2024-12-14 06:54:01.138484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.383 [2024-12-14 06:54:01.138513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.383 00:23:47.383 Latency(us) 00:23:47.383 [2024-12-14T06:54:01.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.383 [2024-12-14T06:54:01.375Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:47.383 nvme0n1 : 2.00 20729.68 80.98 0.00 0.00 6169.17 2546.97 20375.74 00:23:47.383 [2024-12-14T06:54:01.375Z] =================================================================================================================== 00:23:47.383 [2024-12-14T06:54:01.375Z] Total : 20729.68 80.98 0.00 0.00 6169.17 2546.97 20375.74 00:23:47.383 0 00:23:47.383 06:54:01 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:47.383 06:54:01 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:47.383 06:54:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:47.383 06:54:01 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:47.383 | .driver_specific 00:23:47.383 | .nvme_error 00:23:47.383 | .status_code 00:23:47.383 | .command_transient_transport_error' 00:23:47.642 06:54:01 -- host/digest.sh@71 -- # (( 162 > 0 )) 00:23:47.642 06:54:01 -- host/digest.sh@73 -- # killprocess 87306 00:23:47.642 06:54:01 -- common/autotest_common.sh@936 -- # '[' -z 87306 ']' 00:23:47.642 06:54:01 -- common/autotest_common.sh@940 -- # kill -0 87306 00:23:47.642 06:54:01 -- common/autotest_common.sh@941 -- # uname 00:23:47.642 06:54:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:47.642 06:54:01 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87306 00:23:47.642 06:54:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:47.642 killing process with pid 87306 00:23:47.642 06:54:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:47.642 06:54:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87306' 00:23:47.642 Received shutdown signal, test time was about 2.000000 seconds 00:23:47.642 00:23:47.642 Latency(us) 00:23:47.642 [2024-12-14T06:54:01.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.642 [2024-12-14T06:54:01.634Z] =================================================================================================================== 00:23:47.642 [2024-12-14T06:54:01.634Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:47.642 06:54:01 -- common/autotest_common.sh@955 -- # kill 87306 00:23:47.642 06:54:01 -- common/autotest_common.sh@960 -- # wait 87306 00:23:47.900 06:54:01 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:23:47.900 06:54:01 -- host/digest.sh@54 -- # local rw bs qd 00:23:47.900 06:54:01 -- host/digest.sh@56 -- # rw=randread 00:23:47.900 06:54:01 -- host/digest.sh@56 -- # bs=131072 00:23:47.900 06:54:01 -- host/digest.sh@56 -- # qd=16 00:23:47.900 06:54:01 -- host/digest.sh@58 -- # bperfpid=87398 00:23:47.900 06:54:01 -- host/digest.sh@60 -- # waitforlisten 87398 /var/tmp/bperf.sock 00:23:47.900 06:54:01 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:23:47.900 06:54:01 -- common/autotest_common.sh@829 -- # '[' -z 87398 ']' 00:23:47.900 06:54:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:47.900 06:54:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:47.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:47.900 06:54:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:47.900 06:54:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:47.900 06:54:01 -- common/autotest_common.sh@10 -- # set +x 00:23:48.158 [2024-12-14 06:54:01.905217] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:48.158 [2024-12-14 06:54:01.905326] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87398 ] 00:23:48.158 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:48.158 Zero copy mechanism will not be used. 
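The get_transient_errcount trace above boils down to a single RPC-plus-jq check. A minimal standalone sketch of that check follows, assuming the same rpc.py path, bperf socket, and bdev name that appear in this log; the variable names are illustrative only.

#!/usr/bin/env bash
# Sketch of the transient-error check performed by host/digest.sh in the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc.py path as seen in the trace
sock=/var/tmp/bperf.sock                          # bdevperf RPC socket as seen in the trace

# Read the per-bdev NVMe error statistics (populated because bdevperf was started with
# bdev_nvme_set_options --nvme-error-stat) and pull out the transient transport error count.
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

# The injected data digest corruption must have produced at least one such error
# (this run reports 162) for the test step to pass.
(( errcount > 0 ))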
00:23:48.158 [2024-12-14 06:54:02.044257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.417 [2024-12-14 06:54:02.168532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.984 06:54:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:48.984 06:54:02 -- common/autotest_common.sh@862 -- # return 0 00:23:48.984 06:54:02 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:48.984 06:54:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:49.243 06:54:03 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:49.243 06:54:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.243 06:54:03 -- common/autotest_common.sh@10 -- # set +x 00:23:49.243 06:54:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.243 06:54:03 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:49.243 06:54:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:49.501 nvme0n1 00:23:49.761 06:54:03 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:49.761 06:54:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.761 06:54:03 -- common/autotest_common.sh@10 -- # set +x 00:23:49.761 06:54:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.761 06:54:03 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:49.761 06:54:03 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:49.761 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:49.761 Zero copy mechanism will not be used. 00:23:49.761 Running I/O for 2 seconds... 
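The setup traced just above for the 131072-byte, queue-depth-16 randread run reduces to a few RPC calls. A minimal sketch is below, using the rpc.py path, bperf socket, target address, and NQN shown in this log; which default RPC socket rpc_cmd addresses for the target-side call is an assumption here.

#!/usr/bin/env bash
# Sketch of the digest-error setup traced above (host/digest.sh).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

# bdevperf side: count NVMe errors per status code and retry indefinitely, so the
# injected digest errors surface as transient transport errors rather than failed I/O.
"$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the namespace over TCP with data digest enabled (--ddgst), as in the trace.
"$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target side (rpc_cmd in the trace; default RPC socket assumed): corrupt crc32c
# results with the interval shown in the trace (-i 32) so reads hit data digest errors.
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32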
00:23:49.761 [2024-12-14 06:54:03.610656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.761 [2024-12-14 06:54:03.610748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.761 [2024-12-14 06:54:03.610773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.761 [2024-12-14 06:54:03.614349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.761 [2024-12-14 06:54:03.614392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.761 [2024-12-14 06:54:03.614406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.761 [2024-12-14 06:54:03.618215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.761 [2024-12-14 06:54:03.618260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.761 [2024-12-14 06:54:03.618274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.761 [2024-12-14 06:54:03.621841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.761 [2024-12-14 06:54:03.621892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.761 [2024-12-14 06:54:03.621920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.761 [2024-12-14 06:54:03.625877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.761 [2024-12-14 06:54:03.625929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.761 [2024-12-14 06:54:03.625967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.761 [2024-12-14 06:54:03.629659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.761 [2024-12-14 06:54:03.629712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.761 [2024-12-14 06:54:03.629755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.761 [2024-12-14 06:54:03.633632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.761 [2024-12-14 06:54:03.633685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.761 [2024-12-14 06:54:03.633715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.761 [2024-12-14 06:54:03.637124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.761 [2024-12-14 06:54:03.637163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.761 [2024-12-14 06:54:03.637191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.761 [2024-12-14 06:54:03.641082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.761 [2024-12-14 06:54:03.641124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.761 [2024-12-14 06:54:03.641153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.761 [2024-12-14 06:54:03.644633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.761 [2024-12-14 06:54:03.644674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.761 [2024-12-14 06:54:03.644702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.761 [2024-12-14 06:54:03.648306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.761 [2024-12-14 06:54:03.648507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.761 [2024-12-14 06:54:03.648548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.761 [2024-12-14 06:54:03.652189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.761 [2024-12-14 06:54:03.652230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.761 [2024-12-14 06:54:03.652260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.761 [2024-12-14 06:54:03.656266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.761 [2024-12-14 06:54:03.656519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.761 [2024-12-14 06:54:03.656553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.761 [2024-12-14 06:54:03.660798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.761 [2024-12-14 06:54:03.660853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.761 [2024-12-14 06:54:03.660881] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.761 [2024-12-14 06:54:03.665332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.665519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.665554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.669152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.669192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.669222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.672662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.672701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.672730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.676702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.676925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.677069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.681199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.681422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.681571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.685534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.685767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.685886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.689491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.689529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.689558] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.692096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.692133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.692161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.696121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.696158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.696187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.699807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.699846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.699858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.703841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.703881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.703909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.707263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.707312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.707342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.711289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.711327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.711340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.715184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.715222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:49.762 [2024-12-14 06:54:03.715235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.718737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.718790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.718820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.722535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.722575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.722604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.726364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.726435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.726479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.730319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.730359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.730388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.734361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.734404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.734433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.738276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.738320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.738334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.741990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.742210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.742229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.746218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.746259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.746289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.762 [2024-12-14 06:54:03.750245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:49.762 [2024-12-14 06:54:03.750287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.762 [2024-12-14 06:54:03.750301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.023 [2024-12-14 06:54:03.753015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.023 [2024-12-14 06:54:03.753051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.023 [2024-12-14 06:54:03.753080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.023 [2024-12-14 06:54:03.756817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.023 [2024-12-14 06:54:03.756856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.023 [2024-12-14 06:54:03.756885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.023 [2024-12-14 06:54:03.760363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.023 [2024-12-14 06:54:03.760402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.023 [2024-12-14 06:54:03.760431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.023 [2024-12-14 06:54:03.764154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.023 [2024-12-14 06:54:03.764191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.023 [2024-12-14 06:54:03.764220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.023 [2024-12-14 06:54:03.768060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.023 [2024-12-14 06:54:03.768099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.023 [2024-12-14 06:54:03.768112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.023 [2024-12-14 06:54:03.771552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.023 [2024-12-14 06:54:03.771590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.023 [2024-12-14 06:54:03.771618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.023 [2024-12-14 06:54:03.774885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.023 [2024-12-14 06:54:03.774939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.023 [2024-12-14 06:54:03.774981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.023 [2024-12-14 06:54:03.778942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.023 [2024-12-14 06:54:03.779004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.023 [2024-12-14 06:54:03.779018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.023 [2024-12-14 06:54:03.783161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.023 [2024-12-14 06:54:03.783202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.023 [2024-12-14 06:54:03.783231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.023 [2024-12-14 06:54:03.786952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.023 [2024-12-14 06:54:03.787180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.023 [2024-12-14 06:54:03.787213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.023 [2024-12-14 06:54:03.790732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.023 [2024-12-14 06:54:03.790981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.023 [2024-12-14 06:54:03.791000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.023 [2024-12-14 06:54:03.794521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.023 [2024-12-14 06:54:03.794559] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.023 [2024-12-14 06:54:03.794598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.023 [2024-12-14 06:54:03.798540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.798579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.798608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.802063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.802275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.802295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.805430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.805638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.805785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.809802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.810022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.810194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.813703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.813909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.814166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.817411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.817601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.817636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.821614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 
[2024-12-14 06:54:03.821653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.821681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.824599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.824812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.824845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.828628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.828671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.828700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.831974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.832018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.832046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.835513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.835701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.835734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.839825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.840056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.840089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.843811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.844028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.844062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.847741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.847783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.847812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.851177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.851216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.851229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.854600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.854637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.854662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.858195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.858238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.858252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.862073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.862110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.862179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.865835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.866039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.866074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.870204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.870248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.870263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.874531] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.874568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.874596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.878240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.878282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.878297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.882244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.882284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.882313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.885916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.886186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.886204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.889803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.889998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.890032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.893651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.893688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.893719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.024 [2024-12-14 06:54:03.897406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.024 [2024-12-14 06:54:03.897604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.024 [2024-12-14 06:54:03.897640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:50.024 [2024-12-14 06:54:03.901894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.902099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.902239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.905776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.905809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.905837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.909449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.909643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.909782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.913370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.913574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.913740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.916914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.917138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.917262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.920872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.921078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.921202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.925109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.925346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.925542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.929510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.929720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.929869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.933316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.933503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.933651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.937282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.937482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.937620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.940742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.940967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.941102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.944857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.945069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.945208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.948603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.948783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.948815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.951921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.952001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.952033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.955501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.955539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.955567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.958609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.958645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.958673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.962508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.962545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.962574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.966109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.966166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.966180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.969171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.969207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.969235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.973031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.973065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.973093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.976897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.976933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.976971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.980392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.980430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.980458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.983837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.983874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.983902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.987405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.987593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.987625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.991445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.991484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.991513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.995267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.995480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.025 [2024-12-14 06:54:03.995497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.025 [2024-12-14 06:54:03.999695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.025 [2024-12-14 06:54:03.999735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.026 [2024-12-14 06:54:03.999763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.026 [2024-12-14 06:54:04.002990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.026 [2024-12-14 06:54:04.003198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.026 
[2024-12-14 06:54:04.003233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.026 [2024-12-14 06:54:04.007567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.026 [2024-12-14 06:54:04.007759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.026 [2024-12-14 06:54:04.007792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.026 [2024-12-14 06:54:04.011531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.026 [2024-12-14 06:54:04.011721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.026 [2024-12-14 06:54:04.011769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.015785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.016004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.016038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.019732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.019763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.019792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.023543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.023582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.023610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.027641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.027852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.028001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.032382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.032421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.032450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.035657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.035692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.035720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.039290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.039345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.039357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.042018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.042053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.042081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.045077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.045127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.045156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.048455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.048491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.048519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.051586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.051623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.051651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.055037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.055083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.055095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.058095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.058128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.058198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.061792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.062000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.062017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.065076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.065107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.065134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.068984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.069029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.069040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.072330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.072364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.072392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.075788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.075826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.075854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.079507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.079543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.079572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.082692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.082730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.082758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.086503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.086540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.086569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.090006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.287 [2024-12-14 06:54:04.090051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.287 [2024-12-14 06:54:04.090064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.287 [2024-12-14 06:54:04.094207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.094248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.094260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.097531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.097567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.097595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.100910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.101153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.101175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.104732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 
[2024-12-14 06:54:04.104770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.104798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.108105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.108141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.108153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.111137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.111331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.111348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.114840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.115039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.115072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.118553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.118593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.118622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.122752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.122994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.123178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.126752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.126978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.126996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.130566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.130755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.130788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.134007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.134066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.134080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.137291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.137326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.137371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.140720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.140757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.140784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.144102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.144138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.144165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.147645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.147682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.147710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.151321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.151544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.151576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.154986] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.155233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.155251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.158569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.158757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.158789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.162231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.162270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.162283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.165565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.165746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.165778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.168200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.168231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.168259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.171840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.171876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.171904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.175442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.175478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.175506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:50.288 [2024-12-14 06:54:04.179275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.179313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.179326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.182498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.288 [2024-12-14 06:54:04.182690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.288 [2024-12-14 06:54:04.182722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.288 [2024-12-14 06:54:04.186284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.186323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.186352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.189637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.189672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.189700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.193498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.193680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.193712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.197148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.197342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.197378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.200590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.200625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.200653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.204361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.204412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.204440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.207996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.208025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.208037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.211154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.211192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.211220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.214601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.214793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.214825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.218225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.218265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.218293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.221706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.221741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.221769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.224729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.224765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.224792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.228510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.228547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.228575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.232176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.232212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.232256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.236067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.236116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.236145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.239295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.239335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.239363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.242857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.243065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.243099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.247086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.247123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.247151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.250702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.250915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.250932] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.254389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.254627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.254646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.257926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.258129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.258216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.261865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.262112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.262250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.265965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.266209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.266353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.269625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.269806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.269990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.289 [2024-12-14 06:54:04.273965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.289 [2024-12-14 06:54:04.274198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.289 [2024-12-14 06:54:04.274416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.550 [2024-12-14 06:54:04.278665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.550 [2024-12-14 06:54:04.278927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.550 
[2024-12-14 06:54:04.279123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.550 [2024-12-14 06:54:04.282520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.550 [2024-12-14 06:54:04.282717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.550 [2024-12-14 06:54:04.282870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.550 [2024-12-14 06:54:04.286564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.550 [2024-12-14 06:54:04.286809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.550 [2024-12-14 06:54:04.287065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.550 [2024-12-14 06:54:04.290646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.550 [2024-12-14 06:54:04.290685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.550 [2024-12-14 06:54:04.290715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.550 [2024-12-14 06:54:04.294376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.550 [2024-12-14 06:54:04.294414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.550 [2024-12-14 06:54:04.294442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.550 [2024-12-14 06:54:04.297699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.550 [2024-12-14 06:54:04.297736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.550 [2024-12-14 06:54:04.297764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.550 [2024-12-14 06:54:04.301084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.550 [2024-12-14 06:54:04.301118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.550 [2024-12-14 06:54:04.301146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.550 [2024-12-14 06:54:04.304673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.550 [2024-12-14 06:54:04.304709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.550 [2024-12-14 06:54:04.304737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.550 [2024-12-14 06:54:04.308047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.550 [2024-12-14 06:54:04.308075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.550 [2024-12-14 06:54:04.308086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.550 [2024-12-14 06:54:04.311314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.550 [2024-12-14 06:54:04.311353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.550 [2024-12-14 06:54:04.311381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.550 [2024-12-14 06:54:04.315135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.550 [2024-12-14 06:54:04.315173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.550 [2024-12-14 06:54:04.315202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.550 [2024-12-14 06:54:04.318249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.550 [2024-12-14 06:54:04.318288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.550 [2024-12-14 06:54:04.318317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.550 [2024-12-14 06:54:04.322050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.550 [2024-12-14 06:54:04.322083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.550 [2024-12-14 06:54:04.322095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.325428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.325465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.325494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.329406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.329444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.329472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.332865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.332902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.332931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.336539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.336586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.336614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.340344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.340559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.340681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.344453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.344490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.344518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.348462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.348654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.348671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.352479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.352669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.352702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.356410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.356447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.356475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.359514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.359706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.359740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.363402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.363440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.363468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.367161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.367198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.367226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.370746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.370784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.370813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.374121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.374183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.374196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.377500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.377536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.377565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.380884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 
[2024-12-14 06:54:04.380920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.380964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.384659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.384696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.384724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.387766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.387968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.388000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.391791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.391830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.391859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.396232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.396270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.396298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.399901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.399966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.399980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.403693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.403732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.403761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.408005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.408043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.408071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.411696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.411735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.411780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.414932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.414996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.415010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.418092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.551 [2024-12-14 06:54:04.418129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.551 [2024-12-14 06:54:04.418182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.551 [2024-12-14 06:54:04.421572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.421609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.421651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.424468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.424502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.424530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.428135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.428315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.428364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.431913] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.432134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.432151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.435520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.435560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.435589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.439047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.439263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.439385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.443081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.443135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.443163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.446964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.447194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.447211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.450628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.450820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.450853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.454767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.454989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.455007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:50.552 [2024-12-14 06:54:04.459235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.459286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.459314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.462900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.462937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.462977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.467364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.467416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.467444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.471184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.471230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.471271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.475077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.475114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.475142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.479249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.479287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.479315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.483290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.483328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.483357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.487468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.487503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.487532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.491528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.491567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.491595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.495521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.495559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.495587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.499697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.499736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.499764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.503916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.503988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.504002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.508173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.508208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.508236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.512271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.512309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.512322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.516163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.516202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.516230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.520077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.520116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.520144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.552 [2024-12-14 06:54:04.524181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.552 [2024-12-14 06:54:04.524219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.552 [2024-12-14 06:54:04.524247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.553 [2024-12-14 06:54:04.528201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.553 [2024-12-14 06:54:04.528239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.553 [2024-12-14 06:54:04.528267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.553 [2024-12-14 06:54:04.532165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.553 [2024-12-14 06:54:04.532201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.553 [2024-12-14 06:54:04.532214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.553 [2024-12-14 06:54:04.535327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.553 [2024-12-14 06:54:04.535366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.553 [2024-12-14 06:54:04.535394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.553 [2024-12-14 06:54:04.538801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.553 [2024-12-14 06:54:04.538840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.553 [2024-12-14 06:54:04.538869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.813 [2024-12-14 06:54:04.542547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.813 [2024-12-14 06:54:04.542585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.813 [2024-12-14 06:54:04.542613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.813 [2024-12-14 06:54:04.546177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.813 [2024-12-14 06:54:04.546217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.813 [2024-12-14 06:54:04.546231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.813 [2024-12-14 06:54:04.549515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.813 [2024-12-14 06:54:04.549567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.813 [2024-12-14 06:54:04.549596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.813 [2024-12-14 06:54:04.553322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.813 [2024-12-14 06:54:04.553360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.813 [2024-12-14 06:54:04.553404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.813 [2024-12-14 06:54:04.556446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.813 [2024-12-14 06:54:04.556664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.813 [2024-12-14 06:54:04.556696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.813 [2024-12-14 06:54:04.560265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.813 [2024-12-14 06:54:04.560304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.560332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.563480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.563673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 
[2024-12-14 06:54:04.563707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.567168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.567203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.567231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.570570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.570607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.570635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.573689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.573725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.573753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.577262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.577299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.577327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.580325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.580502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.580536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.583915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.584107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.584140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.588379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.588419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.588463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.592311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.592350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.592394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.596015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.596053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.596081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.599557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.599756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.599789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.603494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.603686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.603729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.607348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.607554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.607571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.611185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.611232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.611260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.615000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.615208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.615227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.618923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.618990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.619004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.622968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.623017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.623045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.626833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.626872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.626901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.630991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.631037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.631065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.634617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.634653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.634682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.638519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.638557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.638585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.641877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.641914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.641942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.645142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.645178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.645206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.648792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.648830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.648859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.652110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.652146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.652175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.655829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.656058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.656076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.814 [2024-12-14 06:54:04.659723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.814 [2024-12-14 06:54:04.659775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.814 [2024-12-14 06:54:04.659804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.663077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.663115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.663143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.666846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 
[2024-12-14 06:54:04.666886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.666914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.670577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.670768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.670800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.674182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.674211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.674223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.677757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.677795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.677825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.680764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.680801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.680830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.684789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.684826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.684855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.688427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.688466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.688494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.691876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.691916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.691946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.695548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.695588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.695616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.699446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.699486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.699515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.702896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.702935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.702973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.706509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.706559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.706600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.710227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.710264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.710277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.714074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.714106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.714118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.717292] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.717328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.717372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.720585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.720621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.720649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.724245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.724480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.724601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.728696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.728943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.729074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.732343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.732384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.732427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.735512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.735551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.735579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.739332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.739372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.739385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:50.815 [2024-12-14 06:54:04.743096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.743134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.743163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.746881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.747089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.747122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.750531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.750572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.750600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.754781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.815 [2024-12-14 06:54:04.754820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.815 [2024-12-14 06:54:04.754848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.815 [2024-12-14 06:54:04.758568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.816 [2024-12-14 06:54:04.758793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.816 [2024-12-14 06:54:04.758826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.816 [2024-12-14 06:54:04.763083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.816 [2024-12-14 06:54:04.763308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.816 [2024-12-14 06:54:04.763436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.816 [2024-12-14 06:54:04.767066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.816 [2024-12-14 06:54:04.767308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.816 [2024-12-14 06:54:04.767440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.816 [2024-12-14 06:54:04.771483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.816 [2024-12-14 06:54:04.771685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.816 [2024-12-14 06:54:04.771825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.816 [2024-12-14 06:54:04.775962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.816 [2024-12-14 06:54:04.776048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.816 [2024-12-14 06:54:04.776063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.816 [2024-12-14 06:54:04.779365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.816 [2024-12-14 06:54:04.779404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.816 [2024-12-14 06:54:04.779432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.816 [2024-12-14 06:54:04.782612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.816 [2024-12-14 06:54:04.782650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.816 [2024-12-14 06:54:04.782679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.816 [2024-12-14 06:54:04.787227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.816 [2024-12-14 06:54:04.787451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.816 [2024-12-14 06:54:04.787555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.816 [2024-12-14 06:54:04.791510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.816 [2024-12-14 06:54:04.791704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.816 [2024-12-14 06:54:04.791736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.816 [2024-12-14 06:54:04.795231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.816 [2024-12-14 06:54:04.795267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.816 [2024-12-14 06:54:04.795279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.816 [2024-12-14 06:54:04.798646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.816 [2024-12-14 06:54:04.798835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.816 [2024-12-14 06:54:04.798867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.816 [2024-12-14 06:54:04.802089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:50.816 [2024-12-14 06:54:04.802125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.816 [2024-12-14 06:54:04.802189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.805507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.805542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.805571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.809429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.809614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.809647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.812933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.813189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.813315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.817609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.817850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.817993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.821562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.821756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.821958] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.825427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.825609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.825755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.829546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.829747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.829961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.833831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.834038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.834214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.837962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.838192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.838354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.841801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.842013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.842244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.846200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.846356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.846389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.850645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.850684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:51.077 [2024-12-14 06:54:04.850712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.854404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.854460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.854487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.858356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.858396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.858409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.861595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.861785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.861818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.865364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.865400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.865429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.868514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.868550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.868579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.872596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.872631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.872660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.875981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.876025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18144 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.876035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.879821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.880026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.880060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.884414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.884625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.884657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.888399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.888589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.888621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.892174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.892212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.892241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.895870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.896104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.077 [2024-12-14 06:54:04.896136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.077 [2024-12-14 06:54:04.898634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.077 [2024-12-14 06:54:04.898668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.898711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.902009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.902043] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.902072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.905519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.905556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.905584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.908770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.908806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.908835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.912413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.912449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.912476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.915699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.915905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.915938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.919165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.919199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.919227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.922760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.922836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.922865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.926678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.926732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.926761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.930402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.930461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.930474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.934295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.934340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.934355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.938115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.938177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.938207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.942285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.942328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.942358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.946278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.946321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.946335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.950089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.950125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.950163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.953963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 
00:23:51.078 [2024-12-14 06:54:04.954022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.954035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.957334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.957372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.957384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.961210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.961246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.961274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.964362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.964398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.964441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.968021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.968065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.968094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.971805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.971843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.971871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.975555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.975592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.975620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.979600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.979793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.979826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.983815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.983851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.983880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.987767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.987803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.987831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.991755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.991793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.991821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.995415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.995453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.995481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.078 [2024-12-14 06:54:04.998399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.078 [2024-12-14 06:54:04.998628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.078 [2024-12-14 06:54:04.998660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.079 [2024-12-14 06:54:05.002300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.079 [2024-12-14 06:54:05.002562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-12-14 06:54:05.002579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.079 [2024-12-14 06:54:05.005730] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.079 [2024-12-14 06:54:05.005760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-12-14 06:54:05.005788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.079 [2024-12-14 06:54:05.008956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.079 [2024-12-14 06:54:05.008989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-12-14 06:54:05.009017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.079 [2024-12-14 06:54:05.012337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.079 [2024-12-14 06:54:05.012373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-12-14 06:54:05.012400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.079 [2024-12-14 06:54:05.016299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.079 [2024-12-14 06:54:05.016354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-12-14 06:54:05.016366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.079 [2024-12-14 06:54:05.019821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.079 [2024-12-14 06:54:05.019860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-12-14 06:54:05.019888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.079 [2024-12-14 06:54:05.024078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.079 [2024-12-14 06:54:05.024115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-12-14 06:54:05.024145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.079 [2024-12-14 06:54:05.027596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.079 [2024-12-14 06:54:05.027634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-12-14 06:54:05.027662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
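The back-to-back pairs in this stretch of the log, a "data digest error" reported by nvme_tcp_accel_seq_recv_compute_crc32_done followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion for the same cid with dnr:0, show the host recomputing the CRC32 data digest of each received data PDU and failing the command with a retryable transport status when the digest does not match. As a rough, self-contained illustration only (this is not SPDK's implementation; the data digest is assumed here to be CRC32C as in NVMe/TCP, and the helper names are hypothetical), such a check could look like:

/* crc32c_check.c: illustrative sketch of a data digest verification.
 * Not SPDK code; bitwise CRC32C (Castagnoli, reflected poly 0x82F63B78). */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int bit = 0; bit < 8; bit++) {
			/* Shift one bit at a time, xoring in the reflected polynomial
			 * whenever the low bit is set. */
			crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical helper: true when the received payload still matches the
 * digest carried with the PDU. */
static bool data_digest_ok(const uint8_t *data, size_t len, uint32_t ddgst)
{
	return crc32c(data, len) == ddgst;
}

int main(void)
{
	uint8_t block[512];

	memset(block, 0xA5, sizeof(block));
	uint32_t ddgst = crc32c(block, sizeof(block));   /* digest of the clean payload */

	block[100] ^= 0x01;                              /* simulate injected corruption */
	printf("digest %s\n",
	       data_digest_ok(block, sizeof(block), ddgst)
	           ? "ok"
	           : "mismatch -> retryable transport error");
	return 0;
}

A mismatch like the one printed by this sketch corresponds to each *ERROR* line above; the *NOTICE* lines that follow are the resulting command completions printed for the affected queue pair.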
00:23:51.079 [2024-12-14 06:54:05.031586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.079 [2024-12-14 06:54:05.031813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-12-14 06:54:05.031849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.079 [2024-12-14 06:54:05.036383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.079 [2024-12-14 06:54:05.036422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-12-14 06:54:05.036450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.079 [2024-12-14 06:54:05.040034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.079 [2024-12-14 06:54:05.040078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-12-14 06:54:05.040107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.079 [2024-12-14 06:54:05.043871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.079 [2024-12-14 06:54:05.043907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-12-14 06:54:05.043935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.079 [2024-12-14 06:54:05.047210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.079 [2024-12-14 06:54:05.047246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-12-14 06:54:05.047259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.079 [2024-12-14 06:54:05.050339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.079 [2024-12-14 06:54:05.050377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-12-14 06:54:05.050390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.079 [2024-12-14 06:54:05.054165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.079 [2024-12-14 06:54:05.054199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-12-14 06:54:05.054227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.079 [2024-12-14 06:54:05.057374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.079 [2024-12-14 06:54:05.057407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-12-14 06:54:05.057435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.079 [2024-12-14 06:54:05.061367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.079 [2024-12-14 06:54:05.061402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-12-14 06:54:05.061430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.079 [2024-12-14 06:54:05.064546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.079 [2024-12-14 06:54:05.064579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-12-14 06:54:05.064608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.068099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.068135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 [2024-12-14 06:54:05.068163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.071393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.071430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 [2024-12-14 06:54:05.071460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.074880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.074919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 [2024-12-14 06:54:05.074947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.078842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.078896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 [2024-12-14 06:54:05.078925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.082405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.082475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 [2024-12-14 06:54:05.082520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.085820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.086011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 [2024-12-14 06:54:05.086044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.089623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.089803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 [2024-12-14 06:54:05.089836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.092871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.092910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 [2024-12-14 06:54:05.092938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.096677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.096722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 [2024-12-14 06:54:05.096763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.100832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.100870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 [2024-12-14 06:54:05.100899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.104245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.104284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 [2024-12-14 06:54:05.104327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.107988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.108050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 [2024-12-14 06:54:05.108065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.112076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.112139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 [2024-12-14 06:54:05.112157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.115561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.115760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 [2024-12-14 06:54:05.115795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.119731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.119931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 [2024-12-14 06:54:05.119990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.123835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.124043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 [2024-12-14 06:54:05.124061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.127986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.128035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 [2024-12-14 06:54:05.128064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.131454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.131492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 
[2024-12-14 06:54:05.131520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.135061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.135099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 [2024-12-14 06:54:05.135128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.138945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.139007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 [2024-12-14 06:54:05.139020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.142948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.143011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 [2024-12-14 06:54:05.143025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.146668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.339 [2024-12-14 06:54:05.146756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.339 [2024-12-14 06:54:05.146802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.339 [2024-12-14 06:54:05.150398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.150454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.150482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.153829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.154039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.154073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.158060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.158097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20032 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.158126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.161417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.161600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.161632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.164728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.164761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.164788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.169027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.169226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.169454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.172809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.173015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.173164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.176602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.176786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.176929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.180566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.180792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.180914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.184410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.184609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.184739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.188350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.188546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.188681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.191764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.191970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.192119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.195468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.195667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.195806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.200056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.200253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.200389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.204450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.204504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.204531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.208583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.208620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.208648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.212827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.213029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.213062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.217045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.217096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.217110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.220972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.221007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.221034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.224894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.224929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.224982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.228397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.228432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.228461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.231945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.232159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.232192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.235271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.235307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.235335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.238811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 
[2024-12-14 06:54:05.238849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.238878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.242203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.242237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.242265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.244975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.245019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.245047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.248231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.248265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.248293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.251745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.251952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.251980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.255776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.255826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.255855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.259049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.259084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.259112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.262499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.262538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.262582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.266502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.266538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.266566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.269668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.269703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.269731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.272917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.273138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.273155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.277269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.277307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.277321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.280638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.280676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.280704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.283773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.283987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.284005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.288022] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.288239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.288449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.292231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.292430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.292613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.295838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.296039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.296071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.299629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.299819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.299851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.303318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.303355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.303367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.306876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.306914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.306942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.310205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.310242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.310254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:51.340 [2024-12-14 06:54:05.313772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.313807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.313834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.316958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.317000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.317011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.320724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.320758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.320785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.323928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.324125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.324157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.340 [2024-12-14 06:54:05.328158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.340 [2024-12-14 06:54:05.328207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.340 [2024-12-14 06:54:05.328220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.600 [2024-12-14 06:54:05.331537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.600 [2024-12-14 06:54:05.331728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.600 [2024-12-14 06:54:05.331760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.600 [2024-12-14 06:54:05.335114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.600 [2024-12-14 06:54:05.335151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.600 [2024-12-14 06:54:05.335179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.600 [2024-12-14 06:54:05.338729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.338916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.338948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.341980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.342011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.342039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.345206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.345243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.345270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.348603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.348639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.348668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.352145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.352182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.352210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.355579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.355617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.355645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.359094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.359130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.359157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.362749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.362969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.362986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.366858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.367056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.367090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.370607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.370794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.370826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.374386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.374421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.374434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.376928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.377003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.377033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.380380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.380416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.380443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.383532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.383725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.383757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.387212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.387249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.387261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.390520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.390560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.390588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.394553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.394729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.394762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.397489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.397519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.397546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.401229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.401264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.401292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.404539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.404577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.404605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.407597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.407782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 
[2024-12-14 06:54:05.407815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.411532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.411569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.411597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.415200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.415238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.415266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.418936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.419169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.419186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.422866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.422905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.422933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.426366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.426407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.426435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.429480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.601 [2024-12-14 06:54:05.429517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.601 [2024-12-14 06:54:05.429545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.601 [2024-12-14 06:54:05.432847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.432880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11360 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.432909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.436436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.436470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.436498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.439595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.439787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.439821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.443578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.443760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.443792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.447622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.447805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.447845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.451048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.451083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.451110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.454697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.454734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.454762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.457904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.458114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.458132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.461695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.461927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.462077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.465713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.465908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.465944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.469454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.469490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.469518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.472658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.472692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.472720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.475868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.476064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.476097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.479206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.479243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.479256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.482793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.482853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.482881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.486803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.486841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.486869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.490654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.490707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.490736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.493348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.493382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.493410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.496993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.497029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.497057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.500472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.500663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.500695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.504776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.504812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.504840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.508706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 
[2024-12-14 06:54:05.508741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.508768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.512671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.512730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.512751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.516419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.516456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.516484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.520074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.520108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.520136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.523646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.523681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.523709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.527297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.527334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.602 [2024-12-14 06:54:05.527346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.602 [2024-12-14 06:54:05.530894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.602 [2024-12-14 06:54:05.530930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.603 [2024-12-14 06:54:05.530968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.603 [2024-12-14 06:54:05.534476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xa4e7e0) 00:23:51.603 [2024-12-14 06:54:05.534530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.603 [2024-12-14 06:54:05.534558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.603 [2024-12-14 06:54:05.537645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.603 [2024-12-14 06:54:05.537679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.603 [2024-12-14 06:54:05.537708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.603 [2024-12-14 06:54:05.540609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.603 [2024-12-14 06:54:05.540795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.603 [2024-12-14 06:54:05.540828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.603 [2024-12-14 06:54:05.544339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.603 [2024-12-14 06:54:05.544525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.603 [2024-12-14 06:54:05.544558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.603 [2024-12-14 06:54:05.547678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.603 [2024-12-14 06:54:05.547717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.603 [2024-12-14 06:54:05.547745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.603 [2024-12-14 06:54:05.551363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.603 [2024-12-14 06:54:05.551399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.603 [2024-12-14 06:54:05.551427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.603 [2024-12-14 06:54:05.554940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.603 [2024-12-14 06:54:05.554999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.603 [2024-12-14 06:54:05.555013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.603 [2024-12-14 06:54:05.558290] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.603 [2024-12-14 06:54:05.558332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.603 [2024-12-14 06:54:05.558345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.603 [2024-12-14 06:54:05.562018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.603 [2024-12-14 06:54:05.562068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.603 [2024-12-14 06:54:05.562098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.603 [2024-12-14 06:54:05.565169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.603 [2024-12-14 06:54:05.565203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.603 [2024-12-14 06:54:05.565231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.603 [2024-12-14 06:54:05.568198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.603 [2024-12-14 06:54:05.568387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.603 [2024-12-14 06:54:05.568419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.603 [2024-12-14 06:54:05.572168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.603 [2024-12-14 06:54:05.572203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.603 [2024-12-14 06:54:05.572231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.603 [2024-12-14 06:54:05.575711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.603 [2024-12-14 06:54:05.575749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.603 [2024-12-14 06:54:05.575777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.603 [2024-12-14 06:54:05.579990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.603 [2024-12-14 06:54:05.580046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.603 [2024-12-14 06:54:05.580057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:51.603 [2024-12-14 06:54:05.583672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.603 [2024-12-14 06:54:05.583710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.603 [2024-12-14 06:54:05.583739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.603 [2024-12-14 06:54:05.587220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.603 [2024-12-14 06:54:05.587257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.603 [2024-12-14 06:54:05.587285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.862 [2024-12-14 06:54:05.591539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.862 [2024-12-14 06:54:05.591575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.862 [2024-12-14 06:54:05.591604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.862 [2024-12-14 06:54:05.595466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.862 [2024-12-14 06:54:05.595662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.862 [2024-12-14 06:54:05.595695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.862 [2024-12-14 06:54:05.599280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0) 00:23:51.862 [2024-12-14 06:54:05.599490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.862 [2024-12-14 06:54:05.599506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.862 [2024-12-14 06:54:05.602839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa4e7e0)
00:23:51.862 Latency(us)
00:23:51.862 [2024-12-14T06:54:05.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:51.862 [2024-12-14T06:54:05.854Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:23:51.862 nvme0n1 : 2.00 8340.47 1042.56 0.00 0.00 1915.12 517.59 6315.29
00:23:51.862 [2024-12-14T06:54:05.854Z] ===================================================================================================================
00:23:51.862 [2024-12-14T06:54:05.854Z] Total : 8340.47 1042.56 0.00 0.00 1915.12 517.59 6315.29
00:23:51.862 [2024-12-14 06:54:05.603039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.862 [2024-12-14 06:54:05.603073] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.862 0 00:23:51.862 06:54:05 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:51.862 06:54:05 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:51.862 06:54:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:51.862 06:54:05 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:51.862 | .driver_specific 00:23:51.862 | .nvme_error 00:23:51.862 | .status_code 00:23:51.862 | .command_transient_transport_error' 00:23:51.862 06:54:05 -- host/digest.sh@71 -- # (( 538 > 0 )) 00:23:51.862 06:54:05 -- host/digest.sh@73 -- # killprocess 87398 00:23:51.862 06:54:05 -- common/autotest_common.sh@936 -- # '[' -z 87398 ']' 00:23:51.862 06:54:05 -- common/autotest_common.sh@940 -- # kill -0 87398 00:23:51.862 06:54:05 -- common/autotest_common.sh@941 -- # uname 00:23:52.121 06:54:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:52.121 06:54:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87398 00:23:52.121 killing process with pid 87398 00:23:52.121 Received shutdown signal, test time was about 2.000000 seconds 00:23:52.121 00:23:52.121 Latency(us) 00:23:52.121 [2024-12-14T06:54:06.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.121 [2024-12-14T06:54:06.113Z] =================================================================================================================== 00:23:52.121 [2024-12-14T06:54:06.113Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:52.121 06:54:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:52.121 06:54:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:52.121 06:54:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87398' 00:23:52.121 06:54:05 -- common/autotest_common.sh@955 -- # kill 87398 00:23:52.121 06:54:05 -- common/autotest_common.sh@960 -- # wait 87398 00:23:52.379 06:54:06 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:23:52.379 06:54:06 -- host/digest.sh@54 -- # local rw bs qd 00:23:52.379 06:54:06 -- host/digest.sh@56 -- # rw=randwrite 00:23:52.379 06:54:06 -- host/digest.sh@56 -- # bs=4096 00:23:52.380 06:54:06 -- host/digest.sh@56 -- # qd=128 00:23:52.380 06:54:06 -- host/digest.sh@58 -- # bperfpid=87489 00:23:52.380 06:54:06 -- host/digest.sh@60 -- # waitforlisten 87489 /var/tmp/bperf.sock 00:23:52.380 06:54:06 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:23:52.380 06:54:06 -- common/autotest_common.sh@829 -- # '[' -z 87489 ']' 00:23:52.380 06:54:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:52.380 06:54:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:52.380 06:54:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:52.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:52.380 06:54:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:52.380 06:54:06 -- common/autotest_common.sh@10 -- # set +x 00:23:52.380 [2024-12-14 06:54:06.301165] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
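The xtrace above is the pass check for the randread case that just completed: host/digest.sh reads the NVMe error counters kept by the bdevperf (initiator) app over the bperf RPC socket and requires a non-zero transient transport error count before killing bdevperf (pid 87398). A minimal recap of that check, using only the rpc.py and jq invocations that appear in the trace (the socket path /var/tmp/bperf.sock and bdev name nvme0n1 are copied from it, not assumed):

  # Read per-bdev NVMe error counters from the running bdevperf instance
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  # host/digest.sh@71 then asserts the count is > 0 (538 in this run) before tearing the process down.

The same flow now repeats for the randwrite case launched by run_bperf_err randwrite 4096 128 above.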
00:23:52.380 [2024-12-14 06:54:06.301500] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87489 ] 00:23:52.638 [2024-12-14 06:54:06.443143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.638 [2024-12-14 06:54:06.578031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.574 06:54:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:53.574 06:54:07 -- common/autotest_common.sh@862 -- # return 0 00:23:53.574 06:54:07 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:53.574 06:54:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:53.834 06:54:07 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:53.834 06:54:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.834 06:54:07 -- common/autotest_common.sh@10 -- # set +x 00:23:53.834 06:54:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.834 06:54:07 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:53.834 06:54:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:54.092 nvme0n1 00:23:54.092 06:54:07 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:54.092 06:54:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.092 06:54:07 -- common/autotest_common.sh@10 -- # set +x 00:23:54.092 06:54:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.092 06:54:07 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:54.092 06:54:07 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:54.092 Running I/O for 2 seconds... 
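Condensed, the digest.sh xtrace just above issues the following sequence against the new bdevperf instance before the timed run. Every command is copied from the trace; bperf_rpc expands to rpc.py -s /var/tmp/bperf.sock, and rpc_cmd and bperf_py are the autotest helper names shown there, so nothing beyond those shortened helper names is assumed:

  # Enable per-command NVMe error statistics and unlimited bdev retries on the initiator
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Start with crc32c error injection disabled, then attach the controller with data digest enabled
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Switch injection to corrupt crc32c results (-i 256 as given in the trace), then start the timed randwrite workload
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  bperf_py perform_tests

The injected corruptions surface in the output below as data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions, which get_transient_errcount tallies once the run finishes.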
00:23:54.092 [2024-12-14 06:54:08.062421] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190eea00 00:23:54.092 [2024-12-14 06:54:08.063464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.092 [2024-12-14 06:54:08.063526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:54.092 [2024-12-14 06:54:08.075078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ea680 00:23:54.093 [2024-12-14 06:54:08.076138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.093 [2024-12-14 06:54:08.076187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.352 [2024-12-14 06:54:08.087406] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fe720 00:23:54.352 [2024-12-14 06:54:08.089308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.352 [2024-12-14 06:54:08.089383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.352 [2024-12-14 06:54:08.099774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f7100 00:23:54.352 [2024-12-14 06:54:08.100709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.352 [2024-12-14 06:54:08.100769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.352 [2024-12-14 06:54:08.111840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f8a50 00:23:54.352 [2024-12-14 06:54:08.112860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.352 [2024-12-14 06:54:08.112907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:54.352 [2024-12-14 06:54:08.124106] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e8088 00:23:54.352 [2024-12-14 06:54:08.125024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.352 [2024-12-14 06:54:08.125095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:54.352 [2024-12-14 06:54:08.134986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fc560 00:23:54.352 [2024-12-14 06:54:08.136311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.352 [2024-12-14 06:54:08.136362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e 
p:0 m:0 dnr:0 00:23:54.352 [2024-12-14 06:54:08.147188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e49b0 00:23:54.352 [2024-12-14 06:54:08.148062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.352 [2024-12-14 06:54:08.148150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:54.352 [2024-12-14 06:54:08.161240] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e0630 00:23:54.352 [2024-12-14 06:54:08.162755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.352 [2024-12-14 06:54:08.162800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:54.352 [2024-12-14 06:54:08.169777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e5220 00:23:54.352 [2024-12-14 06:54:08.170126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.352 [2024-12-14 06:54:08.170188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.352 [2024-12-14 06:54:08.179420] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f6458 00:23:54.352 [2024-12-14 06:54:08.179870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.352 [2024-12-14 06:54:08.179904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:54.352 [2024-12-14 06:54:08.189188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ebfd0 00:23:54.352 [2024-12-14 06:54:08.190259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.352 [2024-12-14 06:54:08.190310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.352 [2024-12-14 06:54:08.198123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f9b30 00:23:54.352 [2024-12-14 06:54:08.199239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.352 [2024-12-14 06:54:08.199279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:54.352 [2024-12-14 06:54:08.207290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ef270 00:23:54.352 [2024-12-14 06:54:08.207488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.352 [2024-12-14 06:54:08.207507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:81 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.352 [2024-12-14 06:54:08.217563] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f6cc8 00:23:54.352 [2024-12-14 06:54:08.218173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.352 [2024-12-14 06:54:08.218207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:54.352 [2024-12-14 06:54:08.226893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ef270 00:23:54.352 [2024-12-14 06:54:08.227279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.352 [2024-12-14 06:54:08.227314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:54.352 [2024-12-14 06:54:08.236847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f7970 00:23:54.352 [2024-12-14 06:54:08.237188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.352 [2024-12-14 06:54:08.237220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:54.352 [2024-12-14 06:54:08.248234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190feb58 00:23:54.352 [2024-12-14 06:54:08.248595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.352 [2024-12-14 06:54:08.248635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:54.352 [2024-12-14 06:54:08.257994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e6300 00:23:54.352 [2024-12-14 06:54:08.258289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.352 [2024-12-14 06:54:08.258328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:54.353 [2024-12-14 06:54:08.267353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f7100 00:23:54.353 [2024-12-14 06:54:08.267561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.353 [2024-12-14 06:54:08.267581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:54.353 [2024-12-14 06:54:08.279693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e23b8 00:23:54.353 [2024-12-14 06:54:08.281317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.353 [2024-12-14 06:54:08.281374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.353 [2024-12-14 06:54:08.290853] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f0bc0 00:23:54.353 [2024-12-14 06:54:08.292746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.353 [2024-12-14 06:54:08.292779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.353 [2024-12-14 06:54:08.301567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e38d0 00:23:54.353 [2024-12-14 06:54:08.302535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.353 [2024-12-14 06:54:08.302592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:54.353 [2024-12-14 06:54:08.312233] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f3e60 00:23:54.353 [2024-12-14 06:54:08.313465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.353 [2024-12-14 06:54:08.313499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:54.353 [2024-12-14 06:54:08.324433] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f96f8 00:23:54.353 [2024-12-14 06:54:08.325368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.353 [2024-12-14 06:54:08.325411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:54.353 [2024-12-14 06:54:08.335720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ddc00 00:23:54.353 [2024-12-14 06:54:08.336471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.353 [2024-12-14 06:54:08.336503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:54.612 [2024-12-14 06:54:08.346329] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190de8a8 00:23:54.613 [2024-12-14 06:54:08.347143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 06:54:08.347174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.355313] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e0ea0 00:23:54.613 [2024-12-14 06:54:08.356114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 06:54:08.356146] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.365331] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e84c0 00:23:54.613 [2024-12-14 06:54:08.367308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 06:54:08.367369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.376174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f57b0 00:23:54.613 [2024-12-14 06:54:08.376955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 06:54:08.377015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.386788] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f2d80 00:23:54.613 [2024-12-14 06:54:08.387817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 06:54:08.387851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.399407] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f2d80 00:23:54.613 [2024-12-14 06:54:08.400406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 06:54:08.400468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.410082] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f3e60 00:23:54.613 [2024-12-14 06:54:08.411676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 06:54:08.411729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.421996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f7100 00:23:54.613 [2024-12-14 06:54:08.422752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 06:54:08.422786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.433672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f1868 00:23:54.613 [2024-12-14 06:54:08.434547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 06:54:08.434587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.443935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190de470 00:23:54.613 [2024-12-14 06:54:08.444189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 06:54:08.444212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.457177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fa3a0 00:23:54.613 [2024-12-14 06:54:08.458025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 06:54:08.458106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.468146] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fc998 00:23:54.613 [2024-12-14 06:54:08.469458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 06:54:08.469502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.479823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e7c50 00:23:54.613 [2024-12-14 06:54:08.480332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 06:54:08.480369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.494095] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e6738 00:23:54.613 [2024-12-14 06:54:08.495327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 06:54:08.495363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.504535] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f4f40 00:23:54.613 [2024-12-14 06:54:08.505941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 06:54:08.505987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.517521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e5658 00:23:54.613 [2024-12-14 06:54:08.518496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 
06:54:08.518533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.528946] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fb048 00:23:54.613 [2024-12-14 06:54:08.529856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 06:54:08.529891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.539447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e4140 00:23:54.613 [2024-12-14 06:54:08.540384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 06:54:08.540419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.552154] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f4f40 00:23:54.613 [2024-12-14 06:54:08.553489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 06:54:08.553524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.564744] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e1b48 00:23:54.613 [2024-12-14 06:54:08.566633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 06:54:08.566678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.576572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ed920 00:23:54.613 [2024-12-14 06:54:08.577558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 06:54:08.577622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.588486] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f7da8 00:23:54.613 [2024-12-14 06:54:08.589411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.613 [2024-12-14 06:54:08.589501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:54.613 [2024-12-14 06:54:08.598947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e1710 00:23:54.613 [2024-12-14 06:54:08.600109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:54.613 [2024-12-14 06:54:08.600176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:54.873 [2024-12-14 06:54:08.610665] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f0788 00:23:54.873 [2024-12-14 06:54:08.611968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.873 [2024-12-14 06:54:08.612013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:54.873 [2024-12-14 06:54:08.622690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e3d08 00:23:54.873 [2024-12-14 06:54:08.623043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.873 [2024-12-14 06:54:08.623092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:54.873 [2024-12-14 06:54:08.634663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190eff18 00:23:54.873 [2024-12-14 06:54:08.635611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.873 [2024-12-14 06:54:08.635653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:54.873 [2024-12-14 06:54:08.646339] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190de8a8 00:23:54.873 [2024-12-14 06:54:08.646821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.873 [2024-12-14 06:54:08.646858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:54.873 [2024-12-14 06:54:08.656706] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f35f0 00:23:54.873 [2024-12-14 06:54:08.657123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.873 [2024-12-14 06:54:08.657160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:54.873 [2024-12-14 06:54:08.665895] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ff3c8 00:23:54.873 [2024-12-14 06:54:08.666315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.873 [2024-12-14 06:54:08.666352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:54.873 [2024-12-14 06:54:08.675203] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e5a90 00:23:54.873 [2024-12-14 06:54:08.675534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12979 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:54.873 [2024-12-14 06:54:08.675565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:54.873 [2024-12-14 06:54:08.684259] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fb048 00:23:54.873 [2024-12-14 06:54:08.684560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.873 [2024-12-14 06:54:08.684590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:54.873 [2024-12-14 06:54:08.694122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190edd58 00:23:54.873 [2024-12-14 06:54:08.694557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.873 [2024-12-14 06:54:08.694598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:54.873 [2024-12-14 06:54:08.705996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190eff18 00:23:54.873 [2024-12-14 06:54:08.706317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.873 [2024-12-14 06:54:08.706349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:54.873 [2024-12-14 06:54:08.719337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e88f8 00:23:54.873 [2024-12-14 06:54:08.720427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.873 [2024-12-14 06:54:08.720457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:54.873 [2024-12-14 06:54:08.726272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fef90 00:23:54.873 [2024-12-14 06:54:08.726592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.873 [2024-12-14 06:54:08.726624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:54.873 [2024-12-14 06:54:08.737721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190efae0 00:23:54.873 [2024-12-14 06:54:08.738612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.873 [2024-12-14 06:54:08.738647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:54.873 [2024-12-14 06:54:08.749545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e2c28 00:23:54.873 [2024-12-14 06:54:08.750744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 
lba:17732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.873 [2024-12-14 06:54:08.750775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:54.873 [2024-12-14 06:54:08.756534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fef90 00:23:54.873 [2024-12-14 06:54:08.756849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.873 [2024-12-14 06:54:08.756879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:54.873 [2024-12-14 06:54:08.769916] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fa3a0 00:23:54.873 [2024-12-14 06:54:08.770957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.874 [2024-12-14 06:54:08.770993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:54.874 [2024-12-14 06:54:08.780928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f31b8 00:23:54.874 [2024-12-14 06:54:08.782625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.874 [2024-12-14 06:54:08.782679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.874 [2024-12-14 06:54:08.793111] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f8618 00:23:54.874 [2024-12-14 06:54:08.793870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.874 [2024-12-14 06:54:08.793966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:54.874 [2024-12-14 06:54:08.804496] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e2c28 00:23:54.874 [2024-12-14 06:54:08.805879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.874 [2024-12-14 06:54:08.805926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:54.874 [2024-12-14 06:54:08.816807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f7538 00:23:54.874 [2024-12-14 06:54:08.817178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.874 [2024-12-14 06:54:08.817203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:54.874 [2024-12-14 06:54:08.828821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190de038 00:23:54.874 [2024-12-14 06:54:08.829133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:115 nsid:1 lba:23125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.874 [2024-12-14 06:54:08.829181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:54.874 [2024-12-14 06:54:08.839784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f9f68 00:23:54.874 [2024-12-14 06:54:08.840470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.874 [2024-12-14 06:54:08.840504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:54.874 [2024-12-14 06:54:08.849503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190df988 00:23:54.874 [2024-12-14 06:54:08.849934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.874 [2024-12-14 06:54:08.849980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:54.874 [2024-12-14 06:54:08.860178] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e5220 00:23:54.874 [2024-12-14 06:54:08.860441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.874 [2024-12-14 06:54:08.860474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:55.133 [2024-12-14 06:54:08.871465] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190de8a8 00:23:55.133 [2024-12-14 06:54:08.872691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.133 [2024-12-14 06:54:08.872724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:55.133 [2024-12-14 06:54:08.881370] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f9b30 00:23:55.133 [2024-12-14 06:54:08.881809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.133 [2024-12-14 06:54:08.881842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:55.133 [2024-12-14 06:54:08.892333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f9f68 00:23:55.133 [2024-12-14 06:54:08.893307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.133 [2024-12-14 06:54:08.893348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:55.133 [2024-12-14 06:54:08.904487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ed4e8 00:23:55.133 [2024-12-14 06:54:08.905865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.133 [2024-12-14 06:54:08.905908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:55.133 [2024-12-14 06:54:08.916429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fc128 00:23:55.133 [2024-12-14 06:54:08.916941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.133 [2024-12-14 06:54:08.916988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:55.133 [2024-12-14 06:54:08.928475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e4140 00:23:55.133 [2024-12-14 06:54:08.928940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.133 [2024-12-14 06:54:08.928988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:55.133 [2024-12-14 06:54:08.940401] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f9f68 00:23:55.133 [2024-12-14 06:54:08.940853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.133 [2024-12-14 06:54:08.940901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:55.133 [2024-12-14 06:54:08.952265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e6738 00:23:55.133 [2024-12-14 06:54:08.952655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.133 [2024-12-14 06:54:08.952699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:55.133 [2024-12-14 06:54:08.963991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e1f80 00:23:55.133 [2024-12-14 06:54:08.964917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.133 [2024-12-14 06:54:08.964962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:55.133 [2024-12-14 06:54:08.975762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ecc78 00:23:55.133 [2024-12-14 06:54:08.976331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.133 [2024-12-14 06:54:08.976369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:55.133 [2024-12-14 06:54:08.987418] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e6fa8 00:23:55.133 [2024-12-14 
06:54:08.987976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.133 [2024-12-14 06:54:08.988025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.133 [2024-12-14 06:54:08.999263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e6300 00:23:55.133 [2024-12-14 06:54:08.999793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.133 [2024-12-14 06:54:08.999844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:55.133 [2024-12-14 06:54:09.011319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f0350 00:23:55.133 [2024-12-14 06:54:09.011859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.133 [2024-12-14 06:54:09.011908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:55.133 [2024-12-14 06:54:09.023312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e6300 00:23:55.133 [2024-12-14 06:54:09.023775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.133 [2024-12-14 06:54:09.023848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:55.133 [2024-12-14 06:54:09.034773] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e6fa8 00:23:55.133 [2024-12-14 06:54:09.035194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.133 [2024-12-14 06:54:09.035232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:55.133 [2024-12-14 06:54:09.044945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e9168 00:23:55.133 [2024-12-14 06:54:09.045325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.133 [2024-12-14 06:54:09.045374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:55.133 [2024-12-14 06:54:09.056367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f2d80 00:23:55.133 [2024-12-14 06:54:09.057493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.133 [2024-12-14 06:54:09.057527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:55.133 [2024-12-14 06:54:09.066878] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ed4e8 
00:23:55.133 [2024-12-14 06:54:09.067925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.134 [2024-12-14 06:54:09.067963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:55.134 [2024-12-14 06:54:09.078787] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e6300 00:23:55.134 [2024-12-14 06:54:09.080566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.134 [2024-12-14 06:54:09.080599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:55.134 [2024-12-14 06:54:09.089339] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e88f8 00:23:55.134 [2024-12-14 06:54:09.090711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.134 [2024-12-14 06:54:09.090761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:55.134 [2024-12-14 06:54:09.098880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ec840 00:23:55.134 [2024-12-14 06:54:09.099372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.134 [2024-12-14 06:54:09.099416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:55.134 [2024-12-14 06:54:09.108374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e99d8 00:23:55.134 [2024-12-14 06:54:09.109970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.134 [2024-12-14 06:54:09.110014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:55.134 [2024-12-14 06:54:09.119140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190dfdc0 00:23:55.134 [2024-12-14 06:54:09.119759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.134 [2024-12-14 06:54:09.119792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:55.393 [2024-12-14 06:54:09.128574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fbcf0 00:23:55.393 [2024-12-14 06:54:09.128734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.393 [2024-12-14 06:54:09.128753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:55.393 [2024-12-14 06:54:09.141355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) 
with pdu=0x2000190e5220 00:23:55.393 [2024-12-14 06:54:09.142213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.393 [2024-12-14 06:54:09.142247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:55.393 [2024-12-14 06:54:09.151052] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ee5c8 00:23:55.393 [2024-12-14 06:54:09.152421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.393 [2024-12-14 06:54:09.152467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:55.393 [2024-12-14 06:54:09.161716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fd208 00:23:55.393 [2024-12-14 06:54:09.162372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.393 [2024-12-14 06:54:09.162409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:55.393 [2024-12-14 06:54:09.174588] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ecc78 00:23:55.393 [2024-12-14 06:54:09.175700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.393 [2024-12-14 06:54:09.175732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:55.393 [2024-12-14 06:54:09.181904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ebfd0 00:23:55.393 [2024-12-14 06:54:09.182249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.393 [2024-12-14 06:54:09.182283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:55.393 [2024-12-14 06:54:09.195680] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f7da8 00:23:55.393 [2024-12-14 06:54:09.196764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.393 [2024-12-14 06:54:09.196796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:55.393 [2024-12-14 06:54:09.206996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ed0b0 00:23:55.393 [2024-12-14 06:54:09.208687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.393 [2024-12-14 06:54:09.208722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:55.393 [2024-12-14 06:54:09.219004] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20d48f0) with pdu=0x2000190ef270 00:23:55.393 [2024-12-14 06:54:09.219627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.393 [2024-12-14 06:54:09.219675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:55.393 [2024-12-14 06:54:09.230526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e1f80 00:23:55.393 [2024-12-14 06:54:09.231144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.393 [2024-12-14 06:54:09.231179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:55.393 [2024-12-14 06:54:09.242133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e5220 00:23:55.393 [2024-12-14 06:54:09.242753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.393 [2024-12-14 06:54:09.242787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:55.393 [2024-12-14 06:54:09.253605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ed4e8 00:23:55.393 [2024-12-14 06:54:09.254958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.393 [2024-12-14 06:54:09.255003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:55.394 [2024-12-14 06:54:09.265277] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e12d8 00:23:55.394 [2024-12-14 06:54:09.266190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.394 [2024-12-14 06:54:09.266226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:55.394 [2024-12-14 06:54:09.275350] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f5378 00:23:55.394 [2024-12-14 06:54:09.275569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.394 [2024-12-14 06:54:09.275590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:55.394 [2024-12-14 06:54:09.289328] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ea248 00:23:55.394 [2024-12-14 06:54:09.290314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.394 [2024-12-14 06:54:09.290369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:55.394 [2024-12-14 06:54:09.300446] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fc128 00:23:55.394 [2024-12-14 06:54:09.301809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.394 [2024-12-14 06:54:09.301844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:55.394 [2024-12-14 06:54:09.312376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ee190 00:23:55.394 [2024-12-14 06:54:09.312863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.394 [2024-12-14 06:54:09.312897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:55.394 [2024-12-14 06:54:09.323980] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e9168 00:23:55.394 [2024-12-14 06:54:09.324469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.394 [2024-12-14 06:54:09.324504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:55.394 [2024-12-14 06:54:09.336095] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e1b48 00:23:55.394 [2024-12-14 06:54:09.337009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.394 [2024-12-14 06:54:09.337057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:55.394 [2024-12-14 06:54:09.347425] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f6890 00:23:55.394 [2024-12-14 06:54:09.348071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.394 [2024-12-14 06:54:09.348128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.394 [2024-12-14 06:54:09.358782] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e7818 00:23:55.394 [2024-12-14 06:54:09.359350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.394 [2024-12-14 06:54:09.359384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:55.394 [2024-12-14 06:54:09.370197] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fa3a0 00:23:55.394 [2024-12-14 06:54:09.370749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.394 [2024-12-14 06:54:09.370809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:55.394 [2024-12-14 06:54:09.381480] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190dece0 00:23:55.394 [2024-12-14 06:54:09.382003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.394 [2024-12-14 06:54:09.382051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:55.667 [2024-12-14 06:54:09.392796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f8618 00:23:55.667 [2024-12-14 06:54:09.393305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.667 [2024-12-14 06:54:09.393340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:55.667 [2024-12-14 06:54:09.404196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f6890 00:23:55.667 [2024-12-14 06:54:09.404608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.667 [2024-12-14 06:54:09.404644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:55.667 [2024-12-14 06:54:09.415502] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f9b30 00:23:55.667 [2024-12-14 06:54:09.415859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.667 [2024-12-14 06:54:09.415907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:55.667 [2024-12-14 06:54:09.426562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ea248 00:23:55.667 [2024-12-14 06:54:09.426982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.667 [2024-12-14 06:54:09.427028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:55.667 [2024-12-14 06:54:09.440833] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e9168 00:23:55.667 [2024-12-14 06:54:09.442216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.667 [2024-12-14 06:54:09.442253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:55.667 [2024-12-14 06:54:09.449446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f92c0 00:23:55.667 [2024-12-14 06:54:09.449805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.667 [2024-12-14 06:54:09.449843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:55.667 
[2024-12-14 06:54:09.463123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e8088 00:23:55.667 [2024-12-14 06:54:09.464142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.667 [2024-12-14 06:54:09.464186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:55.667 [2024-12-14 06:54:09.473322] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f81e0 00:23:55.667 [2024-12-14 06:54:09.474505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.667 [2024-12-14 06:54:09.474556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:55.667 [2024-12-14 06:54:09.484574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fc560 00:23:55.667 [2024-12-14 06:54:09.485908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.667 [2024-12-14 06:54:09.485971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:55.667 [2024-12-14 06:54:09.496256] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f4b08 00:23:55.667 [2024-12-14 06:54:09.496639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.667 [2024-12-14 06:54:09.496677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:55.667 [2024-12-14 06:54:09.508280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f2948 00:23:55.667 [2024-12-14 06:54:09.508984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.667 [2024-12-14 06:54:09.509039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:55.667 [2024-12-14 06:54:09.519746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ee5c8 00:23:55.667 [2024-12-14 06:54:09.520359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.667 [2024-12-14 06:54:09.520433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:55.667 [2024-12-14 06:54:09.531133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e12d8 00:23:55.667 [2024-12-14 06:54:09.531658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.667 [2024-12-14 06:54:09.531730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 
dnr:0 00:23:55.667 [2024-12-14 06:54:09.542567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f35f0 00:23:55.667 [2024-12-14 06:54:09.543008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.667 [2024-12-14 06:54:09.543053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:55.667 [2024-12-14 06:54:09.553883] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e73e0 00:23:55.668 [2024-12-14 06:54:09.554391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.668 [2024-12-14 06:54:09.554430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:55.668 [2024-12-14 06:54:09.565114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e73e0 00:23:55.668 [2024-12-14 06:54:09.565828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.668 [2024-12-14 06:54:09.565861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:55.668 [2024-12-14 06:54:09.576599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fe2e8 00:23:55.668 [2024-12-14 06:54:09.577237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.668 [2024-12-14 06:54:09.577271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:55.668 [2024-12-14 06:54:09.588070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e12d8 00:23:55.668 [2024-12-14 06:54:09.588736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.668 [2024-12-14 06:54:09.588770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:55.668 [2024-12-14 06:54:09.599324] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e0ea0 00:23:55.668 [2024-12-14 06:54:09.599913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.668 [2024-12-14 06:54:09.599971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:55.668 [2024-12-14 06:54:09.608520] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e23b8 00:23:55.668 [2024-12-14 06:54:09.609059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.668 [2024-12-14 06:54:09.609106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 
cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:55.668 [2024-12-14 06:54:09.618338] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190de8a8 00:23:55.668 [2024-12-14 06:54:09.619010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.668 [2024-12-14 06:54:09.619061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:55.668 [2024-12-14 06:54:09.630231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190eff18 00:23:55.668 [2024-12-14 06:54:09.631506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.668 [2024-12-14 06:54:09.631536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.668 [2024-12-14 06:54:09.639495] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fef90 00:23:55.668 [2024-12-14 06:54:09.640932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.668 [2024-12-14 06:54:09.640971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.649788] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e23b8 00:23:55.941 [2024-12-14 06:54:09.651139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.941 [2024-12-14 06:54:09.651171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.659264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190de8a8 00:23:55.941 [2024-12-14 06:54:09.660462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.941 [2024-12-14 06:54:09.660493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.669160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fd640 00:23:55.941 [2024-12-14 06:54:09.670703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.941 [2024-12-14 06:54:09.670742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.678902] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e99d8 00:23:55.941 [2024-12-14 06:54:09.680060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.941 [2024-12-14 06:54:09.680090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:99 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.688097] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e88f8 00:23:55.941 [2024-12-14 06:54:09.689537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.941 [2024-12-14 06:54:09.689568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.697485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f92c0 00:23:55.941 [2024-12-14 06:54:09.698793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.941 [2024-12-14 06:54:09.698826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.707170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fe2e8 00:23:55.941 [2024-12-14 06:54:09.707747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.941 [2024-12-14 06:54:09.707778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.719333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e5658 00:23:55.941 [2024-12-14 06:54:09.720737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.941 [2024-12-14 06:54:09.720770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.726820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e0630 00:23:55.941 [2024-12-14 06:54:09.727169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.941 [2024-12-14 06:54:09.727202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.736163] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f57b0 00:23:55.941 [2024-12-14 06:54:09.736580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.941 [2024-12-14 06:54:09.736614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.746453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fd640 00:23:55.941 [2024-12-14 06:54:09.747567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.941 [2024-12-14 06:54:09.747606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.756321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e84c0 00:23:55.941 [2024-12-14 06:54:09.757328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.941 [2024-12-14 06:54:09.757376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.766985] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f2510 00:23:55.941 [2024-12-14 06:54:09.768026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.941 [2024-12-14 06:54:09.768066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.776570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e6300 00:23:55.941 [2024-12-14 06:54:09.777643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.941 [2024-12-14 06:54:09.777674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.785782] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f31b8 00:23:55.941 [2024-12-14 06:54:09.787199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.941 [2024-12-14 06:54:09.787231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.795544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e23b8 00:23:55.941 [2024-12-14 06:54:09.797119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.941 [2024-12-14 06:54:09.797165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.807282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e0a68 00:23:55.941 [2024-12-14 06:54:09.808294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.941 [2024-12-14 06:54:09.808325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.814442] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fac10 00:23:55.941 [2024-12-14 06:54:09.814597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.941 [2024-12-14 06:54:09.814616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.825012] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f7970 00:23:55.941 [2024-12-14 06:54:09.825404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.941 [2024-12-14 06:54:09.825469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.835446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f6458 00:23:55.941 [2024-12-14 06:54:09.835714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.941 [2024-12-14 06:54:09.835733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.844805] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e27f0 00:23:55.941 [2024-12-14 06:54:09.845277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.941 [2024-12-14 06:54:09.845312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:55.941 [2024-12-14 06:54:09.854595] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fb048 00:23:55.942 [2024-12-14 06:54:09.855647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.942 [2024-12-14 06:54:09.855682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:55.942 [2024-12-14 06:54:09.866891] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fb048 00:23:55.942 [2024-12-14 06:54:09.867825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.942 [2024-12-14 06:54:09.867855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:55.942 [2024-12-14 06:54:09.875866] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ed4e8 00:23:55.942 [2024-12-14 06:54:09.877118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.942 [2024-12-14 06:54:09.877149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:55.942 [2024-12-14 06:54:09.885264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e49b0 00:23:55.942 [2024-12-14 06:54:09.885838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.942 [2024-12-14 
06:54:09.885883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:55.942 [2024-12-14 06:54:09.897446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e88f8 00:23:55.942 [2024-12-14 06:54:09.898562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.942 [2024-12-14 06:54:09.898595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:55.942 [2024-12-14 06:54:09.904396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e23b8 00:23:55.942 [2024-12-14 06:54:09.904583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.942 [2024-12-14 06:54:09.904603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:55.942 [2024-12-14 06:54:09.914305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f7970 00:23:55.942 [2024-12-14 06:54:09.914941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.942 [2024-12-14 06:54:09.914981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:55.942 [2024-12-14 06:54:09.924852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e49b0 00:23:55.942 [2024-12-14 06:54:09.925283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.942 [2024-12-14 06:54:09.925319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:56.201 [2024-12-14 06:54:09.936190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ed4e8 00:23:56.201 [2024-12-14 06:54:09.936556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.201 [2024-12-14 06:54:09.936587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:56.201 [2024-12-14 06:54:09.947305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e9168 00:23:56.201 [2024-12-14 06:54:09.947601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.201 [2024-12-14 06:54:09.947635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:56.201 [2024-12-14 06:54:09.956482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e7c50 00:23:56.201 [2024-12-14 06:54:09.956732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:56.201 [2024-12-14 06:54:09.956751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:56.201 [2024-12-14 06:54:09.965991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e9168 00:23:56.201 [2024-12-14 06:54:09.966258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.201 [2024-12-14 06:54:09.966280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:56.201 [2024-12-14 06:54:09.975121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190ed4e8 00:23:56.201 [2024-12-14 06:54:09.975319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.201 [2024-12-14 06:54:09.975338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:56.201 [2024-12-14 06:54:09.985793] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e49b0 00:23:56.201 [2024-12-14 06:54:09.987813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.201 [2024-12-14 06:54:09.987862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.201 [2024-12-14 06:54:09.997551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190e8088 00:23:56.201 [2024-12-14 06:54:09.998682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.201 [2024-12-14 06:54:09.998715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:56.201 [2024-12-14 06:54:10.005938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190eb760 00:23:56.201 [2024-12-14 06:54:10.006163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.201 [2024-12-14 06:54:10.006193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:56.201 [2024-12-14 06:54:10.018379] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190f9b30 00:23:56.201 [2024-12-14 06:54:10.018736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.201 [2024-12-14 06:54:10.018773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:56.201 [2024-12-14 06:54:10.029838] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190df988 00:23:56.201 [2024-12-14 06:54:10.031097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15196 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:56.201 [2024-12-14 06:54:10.031138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:56.201 [2024-12-14 06:54:10.043955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d48f0) with pdu=0x2000190fc560 00:23:56.201 [2024-12-14 06:54:10.045201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.201 [2024-12-14 06:54:10.045236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:56.201 00:23:56.201 Latency(us) 00:23:56.201 [2024-12-14T06:54:10.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.201 [2024-12-14T06:54:10.193Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:56.201 nvme0n1 : 2.00 23311.29 91.06 0.00 0.00 5485.55 1884.16 15073.28 00:23:56.201 [2024-12-14T06:54:10.193Z] =================================================================================================================== 00:23:56.201 [2024-12-14T06:54:10.193Z] Total : 23311.29 91.06 0.00 0.00 5485.55 1884.16 15073.28 00:23:56.201 0 00:23:56.201 06:54:10 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:56.201 06:54:10 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:56.201 06:54:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:56.201 06:54:10 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:56.201 | .driver_specific 00:23:56.201 | .nvme_error 00:23:56.201 | .status_code 00:23:56.201 | .command_transient_transport_error' 00:23:56.459 06:54:10 -- host/digest.sh@71 -- # (( 182 > 0 )) 00:23:56.459 06:54:10 -- host/digest.sh@73 -- # killprocess 87489 00:23:56.459 06:54:10 -- common/autotest_common.sh@936 -- # '[' -z 87489 ']' 00:23:56.459 06:54:10 -- common/autotest_common.sh@940 -- # kill -0 87489 00:23:56.459 06:54:10 -- common/autotest_common.sh@941 -- # uname 00:23:56.459 06:54:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:56.459 06:54:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87489 00:23:56.459 06:54:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:56.459 06:54:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:56.459 killing process with pid 87489 00:23:56.459 06:54:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87489' 00:23:56.459 Received shutdown signal, test time was about 2.000000 seconds 00:23:56.459 00:23:56.459 Latency(us) 00:23:56.459 [2024-12-14T06:54:10.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.460 [2024-12-14T06:54:10.452Z] =================================================================================================================== 00:23:56.460 [2024-12-14T06:54:10.452Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:56.460 06:54:10 -- common/autotest_common.sh@955 -- # kill 87489 00:23:56.460 06:54:10 -- common/autotest_common.sh@960 -- # wait 87489 00:23:57.027 06:54:10 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:23:57.027 06:54:10 -- host/digest.sh@54 -- # local rw bs qd 00:23:57.027 06:54:10 -- host/digest.sh@56 -- # rw=randwrite 00:23:57.027 06:54:10 -- host/digest.sh@56 -- # bs=131072 00:23:57.027 
06:54:10 -- host/digest.sh@56 -- # qd=16 00:23:57.027 06:54:10 -- host/digest.sh@58 -- # bperfpid=87580 00:23:57.027 06:54:10 -- host/digest.sh@60 -- # waitforlisten 87580 /var/tmp/bperf.sock 00:23:57.027 06:54:10 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:23:57.027 06:54:10 -- common/autotest_common.sh@829 -- # '[' -z 87580 ']' 00:23:57.027 06:54:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:57.027 06:54:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:57.027 06:54:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:57.027 06:54:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.027 06:54:10 -- common/autotest_common.sh@10 -- # set +x 00:23:57.027 [2024-12-14 06:54:10.787879] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:57.027 [2024-12-14 06:54:10.788009] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87580 ] 00:23:57.027 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:57.027 Zero copy mechanism will not be used. 00:23:57.027 [2024-12-14 06:54:10.923458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.286 [2024-12-14 06:54:11.031449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.852 06:54:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:57.852 06:54:11 -- common/autotest_common.sh@862 -- # return 0 00:23:57.852 06:54:11 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:57.852 06:54:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:58.110 06:54:12 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:58.110 06:54:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.111 06:54:12 -- common/autotest_common.sh@10 -- # set +x 00:23:58.111 06:54:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.111 06:54:12 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:58.111 06:54:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:58.369 nvme0n1 00:23:58.369 06:54:12 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:58.369 06:54:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.369 06:54:12 -- common/autotest_common.sh@10 -- # set +x 00:23:58.369 06:54:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.369 06:54:12 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:58.369 06:54:12 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:58.628 I/O size of 131072 is greater than zero copy threshold (65536). 
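
For readability, what follows is a condensed sketch of the digest-error pass that the trace above is running. It only restates commands already printed in the trace (the bdevperf flags, the /var/tmp/bperf.sock RPC socket, the TCP target address, and the jq filter); the actual control flow lives in host/digest.sh and common/autotest_common.sh, so treat this as an approximation rather than the script source:

    # start bdevperf against its own RPC socket, waiting for configuration (-z)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

    # keep per-status-code NVMe error counters and retry failed I/O indefinitely
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # attach the TCP controller with data digest enabled ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # ... then tell the accel layer to corrupt crc32c results so data-digest
    # failures are produced on purpose (issued via the trace's rpc_cmd helper,
    # i.e. not through bperf.sock)
    accel_error_inject_error -o crc32c -t corrupt -i 32

    # run the 2-second workload, then read back how many commands completed
    # with a TRANSIENT TRANSPORT ERROR and require that count to be non-zero,
    # as the previous pass did above
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
    errs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errs > 0 ))

Each injected digest failure then surfaces in the log, above and below, as a tcp.c "Data digest error" line followed by the retried WRITE command and its COMMAND TRANSIENT TRANSPORT ERROR completion, which is why that three-entry pattern repeats for the entire two-second run.
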
00:23:58.628 Zero copy mechanism will not be used. 00:23:58.628 Running I/O for 2 seconds... 00:23:58.628 [2024-12-14 06:54:12.474273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.628 [2024-12-14 06:54:12.474605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.628 [2024-12-14 06:54:12.474655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.628 [2024-12-14 06:54:12.479302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.628 [2024-12-14 06:54:12.479524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.628 [2024-12-14 06:54:12.479561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.628 [2024-12-14 06:54:12.484012] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.484223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.484248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.488639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.488779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.488818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.493130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.493245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.493268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.497658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.497770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.497793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.502456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.502650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.502675] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.507222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.507499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.507523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.511701] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.511933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.511956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.516205] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.516398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.516423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.520100] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.520239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.520261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.523856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.523972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.523993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.527745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.527862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.527882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.531638] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.531763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:58.629 [2024-12-14 06:54:12.531784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.535796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.535919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.535940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.540486] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.540715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.540739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.545011] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.545365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.545408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.549581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.549735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.549758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.554315] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.554448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.554472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.559075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.559233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.559261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.563769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.563948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.563970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.568342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.568482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.568503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.572132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.572262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.572285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.576066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.576293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.576317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.579950] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.580139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.580160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.583780] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.583904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.583925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.587619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.587764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.587786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.592407] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.592535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.592558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.596460] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.629 [2024-12-14 06:54:12.596555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.629 [2024-12-14 06:54:12.596577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.629 [2024-12-14 06:54:12.600190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.630 [2024-12-14 06:54:12.600313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.630 [2024-12-14 06:54:12.600335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.630 [2024-12-14 06:54:12.604017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.630 [2024-12-14 06:54:12.604143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.630 [2024-12-14 06:54:12.604164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.630 [2024-12-14 06:54:12.607921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.630 [2024-12-14 06:54:12.608138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.630 [2024-12-14 06:54:12.608159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.630 [2024-12-14 06:54:12.611811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.630 [2024-12-14 06:54:12.612019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.630 [2024-12-14 06:54:12.612043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.630 [2024-12-14 06:54:12.616358] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.630 [2024-12-14 06:54:12.616472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.630 [2024-12-14 06:54:12.616496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.890 [2024-12-14 06:54:12.620630] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.890 [2024-12-14 06:54:12.620742] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.890 [2024-12-14 06:54:12.620766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.890 [2024-12-14 06:54:12.625243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.890 [2024-12-14 06:54:12.625368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.890 [2024-12-14 06:54:12.625391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.890 [2024-12-14 06:54:12.629541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.890 [2024-12-14 06:54:12.629645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.890 [2024-12-14 06:54:12.629699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.890 [2024-12-14 06:54:12.633810] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.890 [2024-12-14 06:54:12.633936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.890 [2024-12-14 06:54:12.633973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.890 [2024-12-14 06:54:12.638285] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.890 [2024-12-14 06:54:12.638438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.890 [2024-12-14 06:54:12.638460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.890 [2024-12-14 06:54:12.642652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.890 [2024-12-14 06:54:12.642927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.642948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.646904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.647160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.647183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.651312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 
[2024-12-14 06:54:12.651513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.651535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.655508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.655639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.655661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.659340] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.659460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.659481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.663086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.663185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.663206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.666876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.667001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.667036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.670797] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.670928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.670949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.674701] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.674905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.674925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.679126] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.679434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.679465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.683306] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.683472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.683493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.687450] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.687598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.687619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.691531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.691688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.691711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.696096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.696225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.696246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.700409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.700595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.700618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.704858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.705013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.705036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.709364] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.709628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.709666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.713921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.714237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.714272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.718403] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.718605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.718628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.722850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.722988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.723010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.726767] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.726864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.726885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.730576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.730699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.730720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.734415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.734528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.734565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:58.891 [2024-12-14 06:54:12.738465] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.738641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.738663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.742426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.742650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.742694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.746659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.746952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.746976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.751032] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.751158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.751180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.891 [2024-12-14 06:54:12.755544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.891 [2024-12-14 06:54:12.755674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.891 [2024-12-14 06:54:12.755711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.759896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.760047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.760070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.764358] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.764514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.764539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.768806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.768953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.768978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.773174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.773372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.773395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.777789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.778026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.778061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.782043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.782326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.782348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.786413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.786602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.786641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.790866] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.790975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.790997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.794884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.795079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.795100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.799183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.799301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.799322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.803503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.803647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.803684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.807943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.808132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.808153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.812270] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.812504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.812555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.816772] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.817036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.817073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.821169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.821282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.821319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.825409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.825531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.825552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.830704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.830822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.830843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.834342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.834460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.834496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.837977] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.838112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.838133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.841625] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.841751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.841771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.845309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.845536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.845556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.848880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.849095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.849116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.852911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.853058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 
[2024-12-14 06:54:12.853081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.856651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.856755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.856776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.860259] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.860386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.860406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.863789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.863903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.863924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.867545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.892 [2024-12-14 06:54:12.867666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.892 [2024-12-14 06:54:12.867688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.892 [2024-12-14 06:54:12.871223] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.893 [2024-12-14 06:54:12.871349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.893 [2024-12-14 06:54:12.871369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.893 [2024-12-14 06:54:12.875505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.893 [2024-12-14 06:54:12.875746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.893 [2024-12-14 06:54:12.875768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.893 [2024-12-14 06:54:12.879773] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:58.893 [2024-12-14 06:54:12.880031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:58.893 [2024-12-14 06:54:12.880091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.152 [2024-12-14 06:54:12.884294] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.152 [2024-12-14 06:54:12.884413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.152 [2024-12-14 06:54:12.884437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.152 [2024-12-14 06:54:12.888921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.152 [2024-12-14 06:54:12.889088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.152 [2024-12-14 06:54:12.889167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.152 [2024-12-14 06:54:12.893650] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.152 [2024-12-14 06:54:12.893764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.152 [2024-12-14 06:54:12.893787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.152 [2024-12-14 06:54:12.898009] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.152 [2024-12-14 06:54:12.898193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.152 [2024-12-14 06:54:12.898216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.152 [2024-12-14 06:54:12.902607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.152 [2024-12-14 06:54:12.902777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.152 [2024-12-14 06:54:12.902800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.152 [2024-12-14 06:54:12.906857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.152 [2024-12-14 06:54:12.906984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.152 [2024-12-14 06:54:12.907005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.152 [2024-12-14 06:54:12.910731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.152 [2024-12-14 06:54:12.910941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.152 [2024-12-14 06:54:12.910963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.152 [2024-12-14 06:54:12.914500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.152 [2024-12-14 06:54:12.914732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.152 [2024-12-14 06:54:12.914753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.152 [2024-12-14 06:54:12.918185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.152 [2024-12-14 06:54:12.918347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.152 [2024-12-14 06:54:12.918369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.152 [2024-12-14 06:54:12.921911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.152 [2024-12-14 06:54:12.922036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.152 [2024-12-14 06:54:12.922057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.152 [2024-12-14 06:54:12.925693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.152 [2024-12-14 06:54:12.925802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.152 [2024-12-14 06:54:12.925823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.152 [2024-12-14 06:54:12.930196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.152 [2024-12-14 06:54:12.930321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.152 [2024-12-14 06:54:12.930344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.152 [2024-12-14 06:54:12.934610] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.152 [2024-12-14 06:54:12.934778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.152 [2024-12-14 06:54:12.934800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.152 [2024-12-14 06:54:12.938396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.152 [2024-12-14 06:54:12.938572] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:12.938594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:12.942187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:12.942394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:12.942415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:12.945809] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:12.946051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:12.946073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:12.949649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:12.949754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:12.949774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:12.953453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:12.953550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:12.953572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:12.957749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:12.957851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:12.957883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:12.961661] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:12.961762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:12.961783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:12.965747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 
[2024-12-14 06:54:12.965928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:12.965950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:12.969607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:12.969757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:12.969778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:12.973541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:12.973741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:12.973761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:12.977312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:12.977555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:12.977576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:12.981557] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:12.981747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:12.981770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:12.985856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:12.985966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:12.985988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:12.989608] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:12.989703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:12.989724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:12.993476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:12.993585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:12.993605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:12.997207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:12.997358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:12.997378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:13.001024] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:13.001172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:13.001193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:13.004784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:13.005027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:13.005049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:13.009229] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:13.009441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:13.009463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:13.013052] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:13.013221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:13.013243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:13.016805] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:13.016925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:13.016946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:13.020761] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:13.020863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:13.020884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:13.024542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:13.024637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:13.024657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:13.028241] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:13.028392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:13.028413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:13.032324] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:13.032525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:13.032551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:13.036771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:13.036975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:13.037010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.153 [2024-12-14 06:54:13.040434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.153 [2024-12-14 06:54:13.040652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.153 [2024-12-14 06:54:13.040673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.044139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.044308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.044330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:59.154 [2024-12-14 06:54:13.047851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.047958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.047992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.051712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.051821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.051845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.055904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.056066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.056089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.060400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.060601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.060624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.064688] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.064846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.064871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.069357] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.069577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.069601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.073819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.074121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.074196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.078372] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.078474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.078515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.083012] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.083170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.083193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.087825] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.087935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.087972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.092252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.092359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.092382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.096559] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.096753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.096775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.100956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.101186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.101208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.105353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.105663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.105698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.109618] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.109863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.109883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.114135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.114328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.114351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.118449] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.118620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.118642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.122863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.122976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.122997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.127171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.127286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.127308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.131745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.131922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.131971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.136348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.136500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.136522] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.154 [2024-12-14 06:54:13.140796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.154 [2024-12-14 06:54:13.141042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.154 [2024-12-14 06:54:13.141079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.414 [2024-12-14 06:54:13.145141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.414 [2024-12-14 06:54:13.145350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.414 [2024-12-14 06:54:13.145371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.414 [2024-12-14 06:54:13.149174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.414 [2024-12-14 06:54:13.149347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.414 [2024-12-14 06:54:13.149368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.414 [2024-12-14 06:54:13.152844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.414 [2024-12-14 06:54:13.152965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.414 [2024-12-14 06:54:13.152985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.414 [2024-12-14 06:54:13.156471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.414 [2024-12-14 06:54:13.156575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.414 [2024-12-14 06:54:13.156596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.414 [2024-12-14 06:54:13.160035] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.414 [2024-12-14 06:54:13.160135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.414 [2024-12-14 06:54:13.160155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.414 [2024-12-14 06:54:13.163698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.414 [2024-12-14 06:54:13.163850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.414 
[2024-12-14 06:54:13.163871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.414 [2024-12-14 06:54:13.167378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.414 [2024-12-14 06:54:13.167581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.414 [2024-12-14 06:54:13.167602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.414 [2024-12-14 06:54:13.171960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.414 [2024-12-14 06:54:13.172261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.172314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.176480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.176749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.176796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.180591] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.180757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.180778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.184777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.184919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.184940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.188833] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.188926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.188947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.192938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.193066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.193100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.197290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.197485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.197506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.201380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.201528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.201566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.205679] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.205894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.205915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.209867] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.210164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.210187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.214152] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.214326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.214358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.218239] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.218326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.218361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.222186] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.222288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.222310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.226171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.226268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.226290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.230105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.230295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.230317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.234028] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.234211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.234232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.237894] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.238227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.238268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.242257] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.242521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.242563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.246345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.246451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.246473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.250676] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.250822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.250844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.255286] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.255428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.255466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.259702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.259861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.259889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.264230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.264450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.264484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.268868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.269039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.269063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.273000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.273200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.273221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.276681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.276863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.276884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.280296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 
[2024-12-14 06:54:13.280469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.280490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.283919] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.284026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.415 [2024-12-14 06:54:13.284047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.415 [2024-12-14 06:54:13.287551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.415 [2024-12-14 06:54:13.287654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.287674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.291410] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.291500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.291523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.295845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.296008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.296064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.300407] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.300616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.300639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.304905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.305174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.305207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.309581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.309874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.309914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.314007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.314188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.314211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.317730] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.317837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.317858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.321326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.321432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.321453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.324976] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.325100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.325121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.328591] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.328734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.328755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.332180] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.332310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.332330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.336714] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.336944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.336966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.340672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.340912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.340955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.344311] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.344437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.344458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.347983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.348058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.348079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.351509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.351608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.351629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.355184] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.355283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.355304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.358991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.359196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.359218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:59.416 [2024-12-14 06:54:13.363174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.363320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.363340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.366879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.367122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.367145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.370550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.370750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.370770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.374128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.374317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.374338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.377698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.377798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.377818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.381388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.381499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.381522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.385572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.385664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.385685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.389231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.389388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.389408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.392854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.416 [2024-12-14 06:54:13.393035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.416 [2024-12-14 06:54:13.393056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.416 [2024-12-14 06:54:13.396649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.417 [2024-12-14 06:54:13.396845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.417 [2024-12-14 06:54:13.396865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.417 [2024-12-14 06:54:13.400296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.417 [2024-12-14 06:54:13.400532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.417 [2024-12-14 06:54:13.400553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.403888] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.404085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.404106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.408229] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.408353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.408375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.412040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.412157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.412177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.415737] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.415830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.415850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.419489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.419640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.419661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.423188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.423339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.423359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.426926] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.427125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.427145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.430803] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.431028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.431074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.435073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.435254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.435275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.439071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.439173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.439193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.442726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.442839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.442860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.446283] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.446380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.446402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.449774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.449918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.449939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.453362] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.453524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.453544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.457062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.457259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.457281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.461134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.461378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.461411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.465269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.465403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 
[2024-12-14 06:54:13.465423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.469155] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.469238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.469259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.472784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.472875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.472896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.476465] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.476556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.476577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.480164] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.480297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.480318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.483762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.483904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.483925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.487774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.488011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.488033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.491900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.492162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.492190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.496370] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.496557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.496579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.500734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.500875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.677 [2024-12-14 06:54:13.500897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.677 [2024-12-14 06:54:13.504963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.677 [2024-12-14 06:54:13.505111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.505134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.509455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.509615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.509637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.513721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.513896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.513918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.518073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.518272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.518301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.522534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.522757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.522779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.527057] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.527302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.527324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.531111] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.531289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.531311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.534817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.534920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.534941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.538455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.538584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.538604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.542076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.542194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.542215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.545721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.545869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.545889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.549342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.549488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.549556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.553754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.554009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.554030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.557810] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.558031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.558052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.561438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.561584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.561605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.565124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.565226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.565247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.568816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.568928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.568948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.572482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.572590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.572611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.576188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 
[2024-12-14 06:54:13.576346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.576370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.580243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.580374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.580395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.583982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.584160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.584186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.587515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.587708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.587728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.591139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.591309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.591330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.594754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.594861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.594881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.598445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.598555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.598576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.602644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.602743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.602765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.606433] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.606641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.606661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.610023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.678 [2024-12-14 06:54:13.610185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.678 [2024-12-14 06:54:13.610206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.678 [2024-12-14 06:54:13.613753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.679 [2024-12-14 06:54:13.613948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.679 [2024-12-14 06:54:13.613968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.679 [2024-12-14 06:54:13.617438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.679 [2024-12-14 06:54:13.617663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.679 [2024-12-14 06:54:13.617683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.679 [2024-12-14 06:54:13.621074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.679 [2024-12-14 06:54:13.621246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.679 [2024-12-14 06:54:13.621267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.679 [2024-12-14 06:54:13.625078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.679 [2024-12-14 06:54:13.625231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.679 [2024-12-14 06:54:13.625256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.679 [2024-12-14 06:54:13.629169] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.679 [2024-12-14 06:54:13.629265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.679 [2024-12-14 06:54:13.629287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.679 [2024-12-14 06:54:13.632817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.679 [2024-12-14 06:54:13.632910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.679 [2024-12-14 06:54:13.632931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.679 [2024-12-14 06:54:13.636527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.679 [2024-12-14 06:54:13.636673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.679 [2024-12-14 06:54:13.636693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.679 [2024-12-14 06:54:13.640077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.679 [2024-12-14 06:54:13.640207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.679 [2024-12-14 06:54:13.640227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.679 [2024-12-14 06:54:13.643735] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.679 [2024-12-14 06:54:13.643929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.679 [2024-12-14 06:54:13.643950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.679 [2024-12-14 06:54:13.647354] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.679 [2024-12-14 06:54:13.647589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.679 [2024-12-14 06:54:13.647624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.679 [2024-12-14 06:54:13.651117] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.679 [2024-12-14 06:54:13.651258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.679 [2024-12-14 06:54:13.651296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:59.679 [2024-12-14 06:54:13.655444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.679 [2024-12-14 06:54:13.655551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.679 [2024-12-14 06:54:13.655573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.679 [2024-12-14 06:54:13.659704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.679 [2024-12-14 06:54:13.659844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.679 [2024-12-14 06:54:13.659867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.679 [2024-12-14 06:54:13.664114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.679 [2024-12-14 06:54:13.664242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.679 [2024-12-14 06:54:13.664264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.668342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.668516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.668537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.672118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.672248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.672268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.675816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.676025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.676046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.679394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.679618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.679638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.682927] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.683123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.683144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.686570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.686665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.686686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.690229] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.690343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.690365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.694511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.694637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.694660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.698437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.698612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.698634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.702592] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.702770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.702792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.706890] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.707167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.707199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.710700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.710878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.710900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.714460] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.714665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.714686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.718222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.718313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.718334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.721818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.721911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.721933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.725551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.725644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.725664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.729594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.729758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.729779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.733650] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.733842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.733864] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.737738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.737937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.737957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.741403] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.741661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.741682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.745205] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.745372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.745392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.748836] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.748932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.748952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.752517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.752672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.752693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.756109] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.756238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.756258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.760496] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.760657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:59.939 [2024-12-14 06:54:13.760679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.764740] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.939 [2024-12-14 06:54:13.764893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.939 [2024-12-14 06:54:13.764914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.939 [2024-12-14 06:54:13.768803] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.769011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.769032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.772416] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.772615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.772635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.776020] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.776190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.776210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.779643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.779764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.779784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.783315] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.783392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.783413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.787066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.787199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.787237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.791479] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.791637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.791659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.795705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.795875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.795896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.799567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.799771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.799792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.803348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.803593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.803652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.807356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.807457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.807478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.811242] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.811356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.811376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.815048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.815144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.815164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.819611] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.819746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.819769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.823710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.823859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.823880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.827656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.827800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.827820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.831597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.831799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.831819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.835368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.835616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.835648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.839176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.839291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.839312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.843385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.843514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.843538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.847722] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.847831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.847853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.851977] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.852117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.852139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.856296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.856504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.856528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.860702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.860911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.860934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.865370] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.865603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.865643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.869841] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.870094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.870121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.874361] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 
[2024-12-14 06:54:13.874516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.874540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.878854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.878981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.879005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.883215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.883345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.883369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.887542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.887657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.887679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.891905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.892113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.892135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.896153] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.940 [2024-12-14 06:54:13.896334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.940 [2024-12-14 06:54:13.896356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.940 [2024-12-14 06:54:13.900755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.941 [2024-12-14 06:54:13.900989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.941 [2024-12-14 06:54:13.901012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.941 [2024-12-14 06:54:13.905378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) 
with pdu=0x2000190fef90 00:23:59.941 [2024-12-14 06:54:13.905622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.941 [2024-12-14 06:54:13.905653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.941 [2024-12-14 06:54:13.909795] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.941 [2024-12-14 06:54:13.910015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.941 [2024-12-14 06:54:13.910037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.941 [2024-12-14 06:54:13.914031] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.941 [2024-12-14 06:54:13.914188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.941 [2024-12-14 06:54:13.914210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.941 [2024-12-14 06:54:13.918303] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.941 [2024-12-14 06:54:13.918411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.941 [2024-12-14 06:54:13.918448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.941 [2024-12-14 06:54:13.922658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.941 [2024-12-14 06:54:13.922776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.941 [2024-12-14 06:54:13.922798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.941 [2024-12-14 06:54:13.926870] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:23:59.941 [2024-12-14 06:54:13.927031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.941 [2024-12-14 06:54:13.927053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.201 [2024-12-14 06:54:13.931233] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.201 [2024-12-14 06:54:13.931432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.201 [2024-12-14 06:54:13.931461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.201 [2024-12-14 06:54:13.935693] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.201 [2024-12-14 06:54:13.935953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.201 [2024-12-14 06:54:13.935977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.201 [2024-12-14 06:54:13.940255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.201 [2024-12-14 06:54:13.940559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.201 [2024-12-14 06:54:13.940593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.201 [2024-12-14 06:54:13.944767] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.201 [2024-12-14 06:54:13.944952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.201 [2024-12-14 06:54:13.944976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.201 [2024-12-14 06:54:13.949291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.201 [2024-12-14 06:54:13.949397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.201 [2024-12-14 06:54:13.949421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.201 [2024-12-14 06:54:13.953732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.201 [2024-12-14 06:54:13.953854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.201 [2024-12-14 06:54:13.953878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.201 [2024-12-14 06:54:13.958165] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.201 [2024-12-14 06:54:13.958315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.201 [2024-12-14 06:54:13.958339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.201 [2024-12-14 06:54:13.962946] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.201 [2024-12-14 06:54:13.963161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.201 [2024-12-14 06:54:13.963184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.201 [2024-12-14 06:54:13.967494] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.201 [2024-12-14 06:54:13.967639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.201 [2024-12-14 06:54:13.967675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.201 [2024-12-14 06:54:13.971453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.201 [2024-12-14 06:54:13.971652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.201 [2024-12-14 06:54:13.971676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.201 [2024-12-14 06:54:13.975188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.201 [2024-12-14 06:54:13.975382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.201 [2024-12-14 06:54:13.975403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.201 [2024-12-14 06:54:13.978934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.201 [2024-12-14 06:54:13.979120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.201 [2024-12-14 06:54:13.979141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.201 [2024-12-14 06:54:13.982642] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.201 [2024-12-14 06:54:13.982739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.201 [2024-12-14 06:54:13.982760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.201 [2024-12-14 06:54:13.986351] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.201 [2024-12-14 06:54:13.986452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.201 [2024-12-14 06:54:13.986474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.201 [2024-12-14 06:54:13.990734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.201 [2024-12-14 06:54:13.990866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.201 [2024-12-14 06:54:13.990889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:24:00.201 [2024-12-14 06:54:13.995160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.201 [2024-12-14 06:54:13.995365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.201 [2024-12-14 06:54:13.995388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.201 [2024-12-14 06:54:13.999662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.201 [2024-12-14 06:54:13.999808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.201 [2024-12-14 06:54:13.999831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.201 [2024-12-14 06:54:14.004249] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.201 [2024-12-14 06:54:14.004487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.201 [2024-12-14 06:54:14.004526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.201 [2024-12-14 06:54:14.008713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.201 [2024-12-14 06:54:14.009061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.201 [2024-12-14 06:54:14.009106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.201 [2024-12-14 06:54:14.013134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.201 [2024-12-14 06:54:14.013271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.201 [2024-12-14 06:54:14.013295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.201 [2024-12-14 06:54:14.017696] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.201 [2024-12-14 06:54:14.017820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.201 [2024-12-14 06:54:14.017842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.201 [2024-12-14 06:54:14.022255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.022370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.022393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.026628] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.026726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.026748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.031258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.031417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.031440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.035715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.035928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.035950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.040434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.040686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.040729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.044949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.045252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.045280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.049209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.049314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.049337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.053689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.053782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.053819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.057734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.057829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.057850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.061667] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.061776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.061797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.065648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.065793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.065816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.069429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.069556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.069577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.073236] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.073433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.073454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.077741] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.078049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.078072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.082404] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.082612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.082651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.086684] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.086779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.086800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.090759] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.090869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.090890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.094741] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.094851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.094872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.098673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.098828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.098848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.102840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.102982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.103003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.107729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.107975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.107997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.112305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.112600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 
[2024-12-14 06:54:14.112629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.116690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.116874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.116897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.121168] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.121285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.121307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.125790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.125940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.125964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.130356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.130466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.130489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.134776] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.134937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.134960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.139243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.202 [2024-12-14 06:54:14.139415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.202 [2024-12-14 06:54:14.139439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.202 [2024-12-14 06:54:14.144005] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.203 [2024-12-14 06:54:14.144247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.203 [2024-12-14 06:54:14.144275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.203 [2024-12-14 06:54:14.148722] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.203 [2024-12-14 06:54:14.148979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.203 [2024-12-14 06:54:14.149019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.203 [2024-12-14 06:54:14.153276] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.203 [2024-12-14 06:54:14.153471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.203 [2024-12-14 06:54:14.153502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.203 [2024-12-14 06:54:14.157833] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.203 [2024-12-14 06:54:14.157970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.203 [2024-12-14 06:54:14.157993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.203 [2024-12-14 06:54:14.161788] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.203 [2024-12-14 06:54:14.161942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.203 [2024-12-14 06:54:14.161963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.203 [2024-12-14 06:54:14.165742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.203 [2024-12-14 06:54:14.165842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.203 [2024-12-14 06:54:14.165865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.203 [2024-12-14 06:54:14.169759] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.203 [2024-12-14 06:54:14.169901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.203 [2024-12-14 06:54:14.169923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.203 [2024-12-14 06:54:14.173585] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.203 [2024-12-14 06:54:14.173714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.203 [2024-12-14 06:54:14.173736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.203 [2024-12-14 06:54:14.177468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.203 [2024-12-14 06:54:14.177716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.203 [2024-12-14 06:54:14.177738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.203 [2024-12-14 06:54:14.182246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.203 [2024-12-14 06:54:14.182490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.203 [2024-12-14 06:54:14.182519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.203 [2024-12-14 06:54:14.186712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.203 [2024-12-14 06:54:14.186814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.203 [2024-12-14 06:54:14.186836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.463 [2024-12-14 06:54:14.190915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.463 [2024-12-14 06:54:14.191018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.463 [2024-12-14 06:54:14.191052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.463 [2024-12-14 06:54:14.194710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.463 [2024-12-14 06:54:14.194813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.463 [2024-12-14 06:54:14.194834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.463 [2024-12-14 06:54:14.198620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.463 [2024-12-14 06:54:14.198711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.463 [2024-12-14 06:54:14.198731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.463 [2024-12-14 06:54:14.202587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.463 [2024-12-14 06:54:14.202749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.463 [2024-12-14 06:54:14.202770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.463 [2024-12-14 06:54:14.206455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.463 [2024-12-14 06:54:14.206625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.206648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.211028] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.211253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.211284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.215511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.215770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.215799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.219854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.220030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.220053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.224003] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.224107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.224127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.227845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.227956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.227978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.231858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 
[2024-12-14 06:54:14.231977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.232000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.235922] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.236087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.236114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.240463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.240637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.240661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.245032] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.245303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.245332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.249636] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.249849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.249872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.254159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.254333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.254357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.258779] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.258909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.258932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.263508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.263626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.263663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.268262] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.268427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.268451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.273037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.273185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.273209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.277714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.277902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.277926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.282337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.282570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.282624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.286974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.287298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.287328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.291447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.291584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.291607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.296218] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.296373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.296396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.300654] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.300789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.300811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.305218] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.305328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.305350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.309968] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.310250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.310280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.314671] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.314909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.314933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.319343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.464 [2024-12-14 06:54:14.319597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.464 [2024-12-14 06:54:14.319620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.464 [2024-12-14 06:54:14.324008] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.324270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.324298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:24:00.465 [2024-12-14 06:54:14.328199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.328281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.328302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.332146] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.332225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.332245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.336068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.336166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.336187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.339766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.339856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.339875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.343510] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.343662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.343682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.347724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.347861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.347882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.352268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.352508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.352539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.356726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.356973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.357012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.361386] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.361603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.361626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.366060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.366397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.366421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.370796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.370921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.370959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.375182] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.375320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.375357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.379816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.380003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.380027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.384399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.384564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.384586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.389021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.389238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.389281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.393246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.393473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.393501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.397606] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.398024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.398049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.402408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.402511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.402533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.406935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.407091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.407145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.411414] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.411514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.411536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.415979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.416181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.416203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.420543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.420699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.420721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.425444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.425722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.425750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.429767] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.430052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.430088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.434617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.434800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.434838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.438756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.438857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.438892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.465 [2024-12-14 06:54:14.442561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.465 [2024-12-14 06:54:14.442666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.465 [2024-12-14 06:54:14.442686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.466 [2024-12-14 06:54:14.446299] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90 00:24:00.466 [2024-12-14 06:54:14.446420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.466 
[2024-12-14 06:54:14.446442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:00.466 [2024-12-14 06:54:14.450064] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90
00:24:00.466 [2024-12-14 06:54:14.450229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.466 [2024-12-14 06:54:14.450251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:00.724 [2024-12-14 06:54:14.453697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90
00:24:00.724 [2024-12-14 06:54:14.453958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.724 [2024-12-14 06:54:14.453980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:00.724 [2024-12-14 06:54:14.457915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90
00:24:00.724 [2024-12-14 06:54:14.458295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.724 [2024-12-14 06:54:14.458328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:00.724 [2024-12-14 06:54:14.461889] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90
00:24:00.724 [2024-12-14 06:54:14.462189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.724 [2024-12-14 06:54:14.462211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:00.724 [2024-12-14 06:54:14.465847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d4a90) with pdu=0x2000190fef90
00:24:00.724 [2024-12-14 06:54:14.466166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.724 [2024-12-14 06:54:14.466187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:00.724
00:24:00.724 Latency(us)
00:24:00.724 [2024-12-14T06:54:14.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:00.724 [2024-12-14T06:54:14.716Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:24:00.724 nvme0n1 : 2.00 7469.83 933.73 0.00 0.00 2136.64 1541.59 6642.97
00:24:00.724 [2024-12-14T06:54:14.716Z] ===================================================================================================================
00:24:00.724 [2024-12-14T06:54:14.716Z] Total : 7469.83 933.73 0.00 0.00 2136.64 1541.59 6642.97
00:24:00.724 0
00:24:00.724 06:54:14 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:00.724 06:54:14 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:00.724 | .driver_specific
00:24:00.724 | .nvme_error
00:24:00.724 | .status_code
00:24:00.724 | .command_transient_transport_error'
00:24:00.724 06:54:14 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:00.724 06:54:14 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:00.983 06:54:14 -- host/digest.sh@71 -- # (( 482 > 0 ))
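The trace above is host/digest.sh checking that the injected data-digest faults really surfaced as transient transport errors: get_transient_errcount reads the counter from bdev_get_iostat over the bperf RPC socket, and the test requires it to be non-zero (482 in this run). A minimal stand-alone sketch of that query, reusing the socket path, bdev name and jq filter from the trace (a simplification, not the verbatim helper from digest.sh):

```bash
#!/usr/bin/env bash
# Sketch of the transient-error check traced above. Assumes an SPDK
# checkout and a bdevperf instance serving RPCs on /var/tmp/bperf.sock;
# the paths and names are taken from the log, not re-verified here.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat exposes per-bdev NVMe error counters under
    # driver_specific.nvme_error.status_code.
    "$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# The digest-error run only passes if the injected CRC faults came back
# as COMMAND TRANSIENT TRANSPORT ERROR completions.
errcount=$(get_transient_errcount nvme0n1)
(( errcount > 0 ))
```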
rmmod nvme_tcp 00:24:01.811 rmmod nvme_fabrics 00:24:01.811 rmmod nvme_keyring 00:24:01.811 06:54:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:01.811 06:54:15 -- nvmf/common.sh@123 -- # set -e 00:24:01.811 06:54:15 -- nvmf/common.sh@124 -- # return 0 00:24:01.811 06:54:15 -- nvmf/common.sh@477 -- # '[' -n 87260 ']' 00:24:01.811 06:54:15 -- nvmf/common.sh@478 -- # killprocess 87260 00:24:01.811 06:54:15 -- common/autotest_common.sh@936 -- # '[' -z 87260 ']' 00:24:01.811 06:54:15 -- common/autotest_common.sh@940 -- # kill -0 87260 00:24:01.811 Process with pid 87260 is not found 00:24:01.811 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (87260) - No such process 00:24:01.811 06:54:15 -- common/autotest_common.sh@963 -- # echo 'Process with pid 87260 is not found' 00:24:01.811 06:54:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:01.811 06:54:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:01.812 06:54:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:01.812 06:54:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:01.812 06:54:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:01.812 06:54:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.812 06:54:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.812 06:54:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.812 06:54:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:01.812 00:24:01.812 real 0m39.809s 00:24:01.812 user 1m13.632s 00:24:01.812 sys 0m10.599s 00:24:01.812 ************************************ 00:24:01.812 END TEST nvmf_digest 00:24:01.812 ************************************ 00:24:01.812 06:54:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:01.812 06:54:15 -- common/autotest_common.sh@10 -- # set +x 00:24:02.070 06:54:15 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:24:02.070 06:54:15 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:24:02.070 06:54:15 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:24:02.070 06:54:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:02.070 06:54:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:02.070 06:54:15 -- common/autotest_common.sh@10 -- # set +x 00:24:02.070 ************************************ 00:24:02.071 START TEST nvmf_mdns_discovery 00:24:02.071 ************************************ 00:24:02.071 06:54:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:24:02.071 * Looking for test storage... 
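Stripped of the xtrace noise, the digest-test teardown traced above reduces to roughly the shell sequence below. This is a condensed sketch, not the helpers themselves: the bare kill/wait stands in for the killprocess helper, the ip netns delete stands in for the namespace-removal helper, and the PID is simply the one reported in the trace.

    # stop the remaining app process, tolerating the case where it already exited
    kill 87260 2>/dev/null || true
    wait 87260 2>/dev/null || true

    # unload the kernel NVMe-oF initiator modules pulled in for the TCP transport
    sync
    modprobe -v -r nvme-tcp        # drops nvme_tcp, nvme_fabrics, nvme_keyring as shown above
    modprobe -v -r nvme-fabrics

    # tear down the target namespace and flush the initiator-side address
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
    ip -4 addr flush nvmf_init_if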
00:24:02.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:02.071 06:54:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:02.071 06:54:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:02.071 06:54:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:02.071 06:54:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:02.071 06:54:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:02.071 06:54:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:02.071 06:54:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:02.071 06:54:16 -- scripts/common.sh@335 -- # IFS=.-: 00:24:02.071 06:54:16 -- scripts/common.sh@335 -- # read -ra ver1 00:24:02.071 06:54:16 -- scripts/common.sh@336 -- # IFS=.-: 00:24:02.071 06:54:16 -- scripts/common.sh@336 -- # read -ra ver2 00:24:02.071 06:54:16 -- scripts/common.sh@337 -- # local 'op=<' 00:24:02.071 06:54:16 -- scripts/common.sh@339 -- # ver1_l=2 00:24:02.071 06:54:16 -- scripts/common.sh@340 -- # ver2_l=1 00:24:02.071 06:54:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:02.071 06:54:16 -- scripts/common.sh@343 -- # case "$op" in 00:24:02.071 06:54:16 -- scripts/common.sh@344 -- # : 1 00:24:02.071 06:54:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:02.071 06:54:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:02.071 06:54:16 -- scripts/common.sh@364 -- # decimal 1 00:24:02.071 06:54:16 -- scripts/common.sh@352 -- # local d=1 00:24:02.071 06:54:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:02.071 06:54:16 -- scripts/common.sh@354 -- # echo 1 00:24:02.071 06:54:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:02.071 06:54:16 -- scripts/common.sh@365 -- # decimal 2 00:24:02.071 06:54:16 -- scripts/common.sh@352 -- # local d=2 00:24:02.071 06:54:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:02.071 06:54:16 -- scripts/common.sh@354 -- # echo 2 00:24:02.071 06:54:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:02.071 06:54:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:02.071 06:54:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:02.071 06:54:16 -- scripts/common.sh@367 -- # return 0 00:24:02.071 06:54:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:02.071 06:54:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:02.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.071 --rc genhtml_branch_coverage=1 00:24:02.071 --rc genhtml_function_coverage=1 00:24:02.071 --rc genhtml_legend=1 00:24:02.071 --rc geninfo_all_blocks=1 00:24:02.071 --rc geninfo_unexecuted_blocks=1 00:24:02.071 00:24:02.071 ' 00:24:02.071 06:54:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:02.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.071 --rc genhtml_branch_coverage=1 00:24:02.071 --rc genhtml_function_coverage=1 00:24:02.071 --rc genhtml_legend=1 00:24:02.071 --rc geninfo_all_blocks=1 00:24:02.071 --rc geninfo_unexecuted_blocks=1 00:24:02.071 00:24:02.071 ' 00:24:02.071 06:54:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:02.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.071 --rc genhtml_branch_coverage=1 00:24:02.071 --rc genhtml_function_coverage=1 00:24:02.071 --rc genhtml_legend=1 00:24:02.071 --rc geninfo_all_blocks=1 00:24:02.071 --rc geninfo_unexecuted_blocks=1 00:24:02.071 00:24:02.071 ' 00:24:02.071 
06:54:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:02.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.071 --rc genhtml_branch_coverage=1 00:24:02.071 --rc genhtml_function_coverage=1 00:24:02.071 --rc genhtml_legend=1 00:24:02.071 --rc geninfo_all_blocks=1 00:24:02.071 --rc geninfo_unexecuted_blocks=1 00:24:02.071 00:24:02.071 ' 00:24:02.071 06:54:16 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:02.071 06:54:16 -- nvmf/common.sh@7 -- # uname -s 00:24:02.071 06:54:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.071 06:54:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.071 06:54:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.071 06:54:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.071 06:54:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.071 06:54:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.071 06:54:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.071 06:54:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.071 06:54:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.071 06:54:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.071 06:54:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:24:02.071 06:54:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:24:02.071 06:54:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.071 06:54:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.071 06:54:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:02.071 06:54:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:02.071 06:54:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.071 06:54:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.071 06:54:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.071 06:54:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.071 06:54:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.071 06:54:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.071 06:54:16 -- paths/export.sh@5 -- # export PATH 00:24:02.071 06:54:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.071 06:54:16 -- nvmf/common.sh@46 -- # : 0 00:24:02.071 06:54:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:02.071 06:54:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:02.071 06:54:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:02.071 06:54:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.071 06:54:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.071 06:54:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:02.071 06:54:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:02.071 06:54:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:02.329 06:54:16 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:24:02.329 06:54:16 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:24:02.329 06:54:16 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:02.329 06:54:16 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:02.329 06:54:16 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:24:02.329 06:54:16 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:02.329 06:54:16 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:24:02.329 06:54:16 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:24:02.329 06:54:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:02.329 06:54:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.329 06:54:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:02.329 06:54:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:02.329 06:54:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:02.329 06:54:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.329 06:54:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.329 06:54:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.330 06:54:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:02.330 06:54:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:02.330 06:54:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:02.330 06:54:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:02.330 06:54:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:02.330 06:54:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:02.330 06:54:16 -- nvmf/common.sh@140 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:24:02.330 06:54:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.330 06:54:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:02.330 06:54:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:02.330 06:54:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:02.330 06:54:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:02.330 06:54:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:02.330 06:54:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.330 06:54:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:02.330 06:54:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:02.330 06:54:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:02.330 06:54:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:02.330 06:54:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:02.330 06:54:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:02.330 Cannot find device "nvmf_tgt_br" 00:24:02.330 06:54:16 -- nvmf/common.sh@154 -- # true 00:24:02.330 06:54:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:02.330 Cannot find device "nvmf_tgt_br2" 00:24:02.330 06:54:16 -- nvmf/common.sh@155 -- # true 00:24:02.330 06:54:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:02.330 06:54:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:02.330 Cannot find device "nvmf_tgt_br" 00:24:02.330 06:54:16 -- nvmf/common.sh@157 -- # true 00:24:02.330 06:54:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:02.330 Cannot find device "nvmf_tgt_br2" 00:24:02.330 06:54:16 -- nvmf/common.sh@158 -- # true 00:24:02.330 06:54:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:02.330 06:54:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:02.330 06:54:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:02.330 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:02.330 06:54:16 -- nvmf/common.sh@161 -- # true 00:24:02.330 06:54:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:02.330 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:02.330 06:54:16 -- nvmf/common.sh@162 -- # true 00:24:02.330 06:54:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:02.330 06:54:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:02.330 06:54:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:02.330 06:54:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:02.330 06:54:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:02.330 06:54:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:02.330 06:54:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:02.330 06:54:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:02.330 06:54:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:02.330 06:54:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:02.330 06:54:16 -- nvmf/common.sh@183 -- # ip 
link set nvmf_init_br up 00:24:02.330 06:54:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:02.330 06:54:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:02.330 06:54:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:02.330 06:54:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:02.330 06:54:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:02.330 06:54:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:02.330 06:54:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:02.330 06:54:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:02.589 06:54:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:02.589 06:54:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:02.589 06:54:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:02.589 06:54:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:02.589 06:54:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:02.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:02.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:24:02.589 00:24:02.589 --- 10.0.0.2 ping statistics --- 00:24:02.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.589 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:24:02.589 06:54:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:02.589 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:02.589 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:24:02.589 00:24:02.589 --- 10.0.0.3 ping statistics --- 00:24:02.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.589 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:24:02.589 06:54:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:02.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:02.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:24:02.589 00:24:02.589 --- 10.0.0.1 ping statistics --- 00:24:02.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.589 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:24:02.589 06:54:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.589 06:54:16 -- nvmf/common.sh@421 -- # return 0 00:24:02.589 06:54:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:02.589 06:54:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.589 06:54:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:02.589 06:54:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:02.589 06:54:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.589 06:54:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:02.589 06:54:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:02.589 06:54:16 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:02.589 06:54:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:02.589 06:54:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:02.589 06:54:16 -- common/autotest_common.sh@10 -- # set +x 00:24:02.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
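Condensed, the nvmf_veth_init sequence traced above builds the two-listener test topology below. The commands are the ones visible in the trace; only the ordering is compacted and the in-namespace link bring-up (nvmf_tgt_if, nvmf_tgt_if2, lo) is elided for brevity.

    # namespace plus three veth pairs: one initiator-facing, two target-facing
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # addressing: 10.0.0.1 initiator, 10.0.0.2 and 10.0.0.3 inside the target namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bridge the host-side peers together and open the firewall for NVMe/TCP
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for peer in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$peer" up
        ip link set "$peer" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT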
00:24:02.589 06:54:16 -- nvmf/common.sh@469 -- # nvmfpid=87893 00:24:02.589 06:54:16 -- nvmf/common.sh@470 -- # waitforlisten 87893 00:24:02.589 06:54:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:02.589 06:54:16 -- common/autotest_common.sh@829 -- # '[' -z 87893 ']' 00:24:02.589 06:54:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.589 06:54:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:02.589 06:54:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.589 06:54:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:02.589 06:54:16 -- common/autotest_common.sh@10 -- # set +x 00:24:02.589 [2024-12-14 06:54:16.475338] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:02.589 [2024-12-14 06:54:16.475425] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.848 [2024-12-14 06:54:16.612974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.848 [2024-12-14 06:54:16.777763] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:02.848 [2024-12-14 06:54:16.778061] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.848 [2024-12-14 06:54:16.778080] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.848 [2024-12-14 06:54:16.778092] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
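The nvmfappstart step above boils down to launching the target inside the namespace and then waiting for its RPC socket to answer; a minimal sketch follows. The polling loop is an assumption standing in for the waitforlisten helper, using rpc_get_methods against the default /var/tmp/spdk.sock socket.

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    nvmfpid=$!

    # wait until the app answers on its RPC socket before sending configuration
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done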
00:24:02.848 [2024-12-14 06:54:16.778157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.783 06:54:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:03.783 06:54:17 -- common/autotest_common.sh@862 -- # return 0 00:24:03.783 06:54:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:03.783 06:54:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:03.783 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:24:03.783 06:54:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.783 06:54:17 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:24:03.783 06:54:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.783 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:24:03.783 06:54:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.783 06:54:17 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:24:03.783 06:54:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.783 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:24:03.783 06:54:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.783 06:54:17 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:03.783 06:54:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.783 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:24:03.783 [2024-12-14 06:54:17.750170] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.783 06:54:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.783 06:54:17 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:03.783 06:54:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.783 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:24:03.783 [2024-12-14 06:54:17.762321] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:03.783 06:54:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.783 06:54:17 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:03.783 06:54:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.783 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:24:03.783 null0 00:24:04.042 06:54:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.042 06:54:17 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:04.042 06:54:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.042 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:24:04.042 null1 00:24:04.042 06:54:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.042 06:54:17 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:24:04.042 06:54:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.042 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:24:04.042 null2 00:24:04.042 06:54:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.042 06:54:17 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:24:04.042 06:54:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.042 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:24:04.042 null3 00:24:04.042 06:54:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.042 06:54:17 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 
00:24:04.042 06:54:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.042 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:24:04.042 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:04.042 06:54:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.042 06:54:17 -- host/mdns_discovery.sh@47 -- # hostpid=87944 00:24:04.042 06:54:17 -- host/mdns_discovery.sh@48 -- # waitforlisten 87944 /tmp/host.sock 00:24:04.042 06:54:17 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:04.042 06:54:17 -- common/autotest_common.sh@829 -- # '[' -z 87944 ']' 00:24:04.042 06:54:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:04.042 06:54:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:04.042 06:54:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:04.042 06:54:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:04.042 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:24:04.042 [2024-12-14 06:54:17.877321] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:04.042 [2024-12-14 06:54:17.877610] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87944 ] 00:24:04.042 [2024-12-14 06:54:18.015068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.302 [2024-12-14 06:54:18.166384] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:04.302 [2024-12-14 06:54:18.166953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.239 06:54:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:05.239 06:54:18 -- common/autotest_common.sh@862 -- # return 0 00:24:05.239 06:54:18 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:24:05.239 06:54:18 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:24:05.239 06:54:18 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:24:05.239 06:54:19 -- host/mdns_discovery.sh@57 -- # avahipid=87973 00:24:05.239 06:54:19 -- host/mdns_discovery.sh@58 -- # sleep 1 00:24:05.239 06:54:19 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:24:05.239 06:54:19 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:24:05.239 Process 1058 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:24:05.239 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:24:05.239 Successfully dropped root privileges. 00:24:05.239 avahi-daemon 0.8 starting up. 00:24:05.239 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:24:05.239 Successfully called chroot(). 00:24:05.239 Successfully dropped remaining capabilities. 00:24:06.175 No service file found in /etc/avahi/services. 00:24:06.175 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:24:06.175 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 
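The mDNS side of the test consists of a second nvmf_tgt acting as the host on its own RPC socket plus an avahi-daemon confined to the two target-facing interfaces. Below is a sketch of that startup, reconstructed from the invocations above; the temporary config file replaces the /dev/fd/63 process substitution the script uses, and the variable names are just shorthand.

    # host-side SPDK app that will run the mDNS discovery client
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    hostpid=$!

    # restart avahi inside the target namespace, limited to IPv4 on the two test interfaces
    avahi-daemon --kill || true
    printf '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no\n' > /tmp/avahi-test.conf
    ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /tmp/avahi-test.conf &
    avahipid=$!
    sleep 1    # give the daemon time to join the mDNS multicast groups, as the script does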
00:24:06.175 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:24:06.175 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:24:06.175 Network interface enumeration completed. 00:24:06.175 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 00:24:06.175 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:24:06.175 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:24:06.175 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:24:06.175 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 2975270958. 00:24:06.175 06:54:20 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:06.175 06:54:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.176 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:24:06.176 06:54:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.176 06:54:20 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:06.176 06:54:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.176 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:24:06.176 06:54:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.176 06:54:20 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:24:06.176 06:54:20 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:24:06.176 06:54:20 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:06.176 06:54:20 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:24:06.176 06:54:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.176 06:54:20 -- host/mdns_discovery.sh@68 -- # xargs 00:24:06.176 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:24:06.176 06:54:20 -- host/mdns_discovery.sh@68 -- # sort 00:24:06.176 06:54:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.176 06:54:20 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:24:06.176 06:54:20 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:24:06.176 06:54:20 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:06.176 06:54:20 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:24:06.176 06:54:20 -- host/mdns_discovery.sh@64 -- # sort 00:24:06.176 06:54:20 -- host/mdns_discovery.sh@64 -- # xargs 00:24:06.176 06:54:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.176 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:24:06.176 06:54:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.176 06:54:20 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:24:06.176 06:54:20 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:06.176 06:54:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.176 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:24:06.176 06:54:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.176 06:54:20 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:24:06.176 06:54:20 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:06.176 06:54:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.176 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:24:06.176 06:54:20 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:24:06.176 06:54:20 -- host/mdns_discovery.sh@68 -- # xargs 
00:24:06.435 06:54:20 -- host/mdns_discovery.sh@68 -- # sort 00:24:06.435 06:54:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:06.435 06:54:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.435 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@64 -- # sort 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@64 -- # xargs 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:24:06.435 06:54:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:06.435 06:54:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.435 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:24:06.435 06:54:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:24:06.435 06:54:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@68 -- # xargs 00:24:06.435 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@68 -- # sort 00:24:06.435 06:54:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.435 [2024-12-14 06:54:20.328448] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@64 -- # sort 00:24:06.435 06:54:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.435 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@64 -- # xargs 00:24:06.435 06:54:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:06.435 06:54:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.435 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:24:06.435 [2024-12-14 06:54:20.407673] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.435 06:54:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:06.435 06:54:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.435 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:24:06.435 06:54:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.435 06:54:20 -- host/mdns_discovery.sh@111 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:24:06.435 06:54:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.435 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:24:06.694 06:54:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.694 06:54:20 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:24:06.694 06:54:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.694 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:24:06.694 06:54:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.694 06:54:20 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:24:06.694 06:54:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.694 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:24:06.694 06:54:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.694 06:54:20 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:24:06.694 06:54:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.694 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:24:06.694 [2024-12-14 06:54:20.447591] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:24:06.694 06:54:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.694 06:54:20 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:24:06.694 06:54:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.694 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:24:06.694 [2024-12-14 06:54:20.455588] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:06.694 06:54:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.694 06:54:20 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=88031 00:24:06.694 06:54:20 -- host/mdns_discovery.sh@125 -- # sleep 5 00:24:06.694 06:54:20 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:24:07.262 [2024-12-14 06:54:21.228449] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:24:07.520 Established under name 'CDC' 00:24:07.779 [2024-12-14 06:54:21.628461] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:24:07.779 [2024-12-14 06:54:21.628521] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:24:07.779 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:24:07.779 cookie is 0 00:24:07.779 is_local: 1 00:24:07.779 our_own: 0 00:24:07.779 wide_area: 0 00:24:07.779 multicast: 1 00:24:07.779 cached: 1 00:24:07.780 [2024-12-14 06:54:21.728461] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:24:07.780 [2024-12-14 06:54:21.728487] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:24:07.780 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:24:07.780 cookie is 0 00:24:07.780 is_local: 1 00:24:07.780 our_own: 0 00:24:07.780 wide_area: 0 00:24:07.780 multicast: 1 00:24:07.780 cached: 1 00:24:08.716 [2024-12-14 06:54:22.637395] 
bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:24:08.716 [2024-12-14 06:54:22.637450] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:24:08.716 [2024-12-14 06:54:22.637469] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:08.976 [2024-12-14 06:54:22.723600] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:24:08.976 [2024-12-14 06:54:22.736960] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:08.976 [2024-12-14 06:54:22.737295] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:08.976 [2024-12-14 06:54:22.737368] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:08.976 [2024-12-14 06:54:22.787294] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:24:08.976 [2024-12-14 06:54:22.787320] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:24:08.976 [2024-12-14 06:54:22.823567] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:24:08.976 [2024-12-14 06:54:22.879256] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:24:08.976 [2024-12-14 06:54:22.879452] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:11.508 06:54:25 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:24:11.508 06:54:25 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:24:11.508 06:54:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.508 06:54:25 -- host/mdns_discovery.sh@80 -- # sort 00:24:11.508 06:54:25 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:24:11.508 06:54:25 -- common/autotest_common.sh@10 -- # set +x 00:24:11.508 06:54:25 -- host/mdns_discovery.sh@80 -- # xargs 00:24:11.508 06:54:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.766 06:54:25 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:24:11.766 06:54:25 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:24:11.766 06:54:25 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:11.766 06:54:25 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:24:11.766 06:54:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.766 06:54:25 -- host/mdns_discovery.sh@76 -- # sort 00:24:11.766 06:54:25 -- common/autotest_common.sh@10 -- # set +x 00:24:11.766 06:54:25 -- host/mdns_discovery.sh@76 -- # xargs 00:24:11.766 06:54:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.766 06:54:25 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:24:11.766 06:54:25 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:24:11.766 06:54:25 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:11.766 06:54:25 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:24:11.766 06:54:25 -- 
host/mdns_discovery.sh@68 -- # sort 00:24:11.766 06:54:25 -- host/mdns_discovery.sh@68 -- # xargs 00:24:11.766 06:54:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.766 06:54:25 -- common/autotest_common.sh@10 -- # set +x 00:24:11.766 06:54:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.766 06:54:25 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:24:11.766 06:54:25 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:24:11.766 06:54:25 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:24:11.766 06:54:25 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.766 06:54:25 -- host/mdns_discovery.sh@64 -- # sort 00:24:11.766 06:54:25 -- host/mdns_discovery.sh@64 -- # xargs 00:24:11.766 06:54:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.766 06:54:25 -- common/autotest_common.sh@10 -- # set +x 00:24:11.766 06:54:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.766 06:54:25 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:24:11.767 06:54:25 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:24:11.767 06:54:25 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:24:11.767 06:54:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.767 06:54:25 -- common/autotest_common.sh@10 -- # set +x 00:24:11.767 06:54:25 -- host/mdns_discovery.sh@72 -- # sort -n 00:24:11.767 06:54:25 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:11.767 06:54:25 -- host/mdns_discovery.sh@72 -- # xargs 00:24:11.767 06:54:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.767 06:54:25 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:24:11.767 06:54:25 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:24:11.767 06:54:25 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:11.767 06:54:25 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:24:11.767 06:54:25 -- host/mdns_discovery.sh@72 -- # sort -n 00:24:11.767 06:54:25 -- host/mdns_discovery.sh@72 -- # xargs 00:24:11.767 06:54:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.767 06:54:25 -- common/autotest_common.sh@10 -- # set +x 00:24:12.025 06:54:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.025 06:54:25 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:24:12.025 06:54:25 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:24:12.025 06:54:25 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:12.025 06:54:25 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:24:12.025 06:54:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.025 06:54:25 -- common/autotest_common.sh@10 -- # set +x 00:24:12.025 06:54:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.025 06:54:25 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:24:12.025 06:54:25 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:24:12.025 06:54:25 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:24:12.025 06:54:25 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:12.025 06:54:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.025 06:54:25 -- common/autotest_common.sh@10 -- # set +x 00:24:12.025 06:54:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.025 06:54:25 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:24:12.025 06:54:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.025 06:54:25 -- common/autotest_common.sh@10 -- # set +x 00:24:12.025 06:54:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.025 06:54:25 -- host/mdns_discovery.sh@139 -- # sleep 1 00:24:12.961 06:54:26 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:24:12.961 06:54:26 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.961 06:54:26 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:24:12.961 06:54:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.961 06:54:26 -- host/mdns_discovery.sh@64 -- # sort 00:24:12.961 06:54:26 -- common/autotest_common.sh@10 -- # set +x 00:24:12.961 06:54:26 -- host/mdns_discovery.sh@64 -- # xargs 00:24:12.961 06:54:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.961 06:54:26 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:24:12.961 06:54:26 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:24:12.961 06:54:26 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:12.961 06:54:26 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:24:12.961 06:54:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.961 06:54:26 -- common/autotest_common.sh@10 -- # set +x 00:24:13.219 06:54:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.219 06:54:26 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:24:13.219 06:54:26 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:24:13.219 06:54:26 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:24:13.219 06:54:26 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:13.219 06:54:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.219 06:54:26 -- common/autotest_common.sh@10 -- # set +x 00:24:13.219 [2024-12-14 06:54:26.996986] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:13.219 [2024-12-14 06:54:26.997767] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:13.219 [2024-12-14 06:54:26.997796] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:13.219 [2024-12-14 06:54:26.997841] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:13.219 [2024-12-14 06:54:26.997854] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:13.219 06:54:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.219 06:54:27 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:24:13.219 06:54:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.219 06:54:27 -- common/autotest_common.sh@10 -- # set +x 00:24:13.219 [2024-12-14 06:54:27.004785] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:13.219 [2024-12-14 06:54:27.005768] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:13.219 [2024-12-14 06:54:27.005864] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:13.219 06:54:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.219 06:54:27 -- host/mdns_discovery.sh@149 -- # sleep 1 00:24:13.219 [2024-12-14 06:54:27.136964] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:24:13.219 [2024-12-14 06:54:27.137293] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:24:13.219 [2024-12-14 06:54:27.200345] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:24:13.219 [2024-12-14 06:54:27.200367] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:13.219 [2024-12-14 06:54:27.200373] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:13.219 [2024-12-14 06:54:27.200389] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:13.219 [2024-12-14 06:54:27.200644] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:24:13.219 [2024-12-14 06:54:27.200652] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:24:13.219 [2024-12-14 06:54:27.200657] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:24:13.219 [2024-12-14 06:54:27.200669] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:13.476 [2024-12-14 06:54:27.247207] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:13.476 [2024-12-14 06:54:27.247240] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:13.476 [2024-12-14 06:54:27.247277] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:24:13.476 [2024-12-14 06:54:27.247285] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:24:14.042 06:54:28 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:24:14.042 06:54:28 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:14.042 06:54:28 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:24:14.042 06:54:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.042 06:54:28 -- common/autotest_common.sh@10 -- # set +x 00:24:14.042 06:54:28 -- host/mdns_discovery.sh@68 -- # sort 00:24:14.042 06:54:28 -- host/mdns_discovery.sh@68 -- # xargs 00:24:14.300 06:54:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:24:14.300 06:54:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.300 06:54:28 -- common/autotest_common.sh@10 -- # set +x 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@64 -- # sort 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@64 -- # xargs 00:24:14.300 06:54:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:24:14.300 06:54:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.300 06:54:28 -- common/autotest_common.sh@10 -- # set +x 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@72 -- # xargs 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@72 -- # sort -n 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:14.300 06:54:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@72 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:14.300 06:54:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.300 06:54:28 -- common/autotest_common.sh@10 -- # set +x 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@72 -- # xargs 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@72 -- # sort -n 00:24:14.300 06:54:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:24:14.300 06:54:28 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:24:14.300 06:54:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.300 06:54:28 -- common/autotest_common.sh@10 -- # set +x 00:24:14.300 06:54:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.560 06:54:28 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:24:14.560 06:54:28 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:24:14.560 06:54:28 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:24:14.560 06:54:28 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:14.560 06:54:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.560 06:54:28 -- common/autotest_common.sh@10 -- # set +x 00:24:14.560 [2024-12-14 06:54:28.318297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.560 [2024-12-14 06:54:28.318347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.560 [2024-12-14 06:54:28.318365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.560 [2024-12-14 06:54:28.318375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.560 [2024-12-14 06:54:28.318384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.560 [2024-12-14 06:54:28.318393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.560 [2024-12-14 06:54:28.318403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.560 [2024-12-14 06:54:28.318413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.560 [2024-12-14 06:54:28.318423] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fab70 is same with the state(5) to be set 00:24:14.560 [2024-12-14 06:54:28.318936] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:14.560 [2024-12-14 06:54:28.318974] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:14.560 [2024-12-14 06:54:28.319012] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:14.560 [2024-12-14 
06:54:28.319026] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:14.560 06:54:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.561 06:54:28 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:24:14.561 06:54:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.561 06:54:28 -- common/autotest_common.sh@10 -- # set +x 00:24:14.561 [2024-12-14 06:54:28.325910] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:14.561 [2024-12-14 06:54:28.325977] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:14.561 [2024-12-14 06:54:28.328213] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fab70 (9): Bad file descriptor 00:24:14.561 06:54:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.561 06:54:28 -- host/mdns_discovery.sh@162 -- # sleep 1 00:24:14.561 [2024-12-14 06:54:28.335286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.561 [2024-12-14 06:54:28.335539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.561 [2024-12-14 06:54:28.335728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.561 [2024-12-14 06:54:28.335880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.561 [2024-12-14 06:54:28.336095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.561 [2024-12-14 06:54:28.336111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.561 [2024-12-14 06:54:28.336122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.561 [2024-12-14 06:54:28.336131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.561 [2024-12-14 06:54:28.336140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1496410 is same with the state(5) to be set 00:24:14.561 [2024-12-14 06:54:28.338257] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:14.561 [2024-12-14 06:54:28.338388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.561 [2024-12-14 06:54:28.338446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.561 [2024-12-14 06:54:28.338463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fab70 with addr=10.0.0.2, port=4420 00:24:14.561 [2024-12-14 06:54:28.338477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fab70 is same with the state(5) to be set 00:24:14.561 [2024-12-14 06:54:28.338509] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fab70 (9): Bad file descriptor 00:24:14.561 [2024-12-14 06:54:28.338539] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
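The nvmf_subsystem_remove_listener calls at @160/@161 above are what trigger the connect() errno 111 retries that follow: once the tcp/4420 listeners are gone, the host-side discovery service keeps retrying that port and is refused until it settles on the remaining 4421 path. A minimal sketch of the same step outside the harness, assuming SPDK's scripts/rpc.py on PATH and the default target RPC socket (the harness wraps this in rpc_cmd; the socket path is an assumption):
# Drop the tcp/4420 listeners on both subsystems (NQNs and addresses copied from the log).
scripts/rpc.py nvmf_subsystem_remove_listener \
    nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_remove_listener \
    nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420
# errno 111 in the posix.c errors below is ECONNREFUSED: nothing listens on 4420
# any more, so each reconnect attempt is refused until the controller re-attaches on 4421.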
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:14.561 [2024-12-14 06:54:28.338563] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:14.561 [2024-12-14 06:54:28.338573] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:14.561 [2024-12-14 06:54:28.338587] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:14.561 [2024-12-14 06:54:28.345250] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1496410 (9): Bad file descriptor 00:24:14.561 [2024-12-14 06:54:28.348319] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:14.561 [2024-12-14 06:54:28.348419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.561 [2024-12-14 06:54:28.348472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.561 [2024-12-14 06:54:28.348486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fab70 with addr=10.0.0.2, port=4420 00:24:14.561 [2024-12-14 06:54:28.348495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fab70 is same with the state(5) to be set 00:24:14.561 [2024-12-14 06:54:28.348510] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fab70 (9): Bad file descriptor 00:24:14.561 [2024-12-14 06:54:28.348537] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:14.561 [2024-12-14 06:54:28.348545] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:14.561 [2024-12-14 06:54:28.348553] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:14.561 [2024-12-14 06:54:28.348565] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:14.561 [2024-12-14 06:54:28.355300] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:14.561 [2024-12-14 06:54:28.355567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.561 [2024-12-14 06:54:28.355616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.561 [2024-12-14 06:54:28.355631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1496410 with addr=10.0.0.3, port=4420 00:24:14.561 [2024-12-14 06:54:28.355642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1496410 is same with the state(5) to be set 00:24:14.561 [2024-12-14 06:54:28.355673] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1496410 (9): Bad file descriptor 00:24:14.561 [2024-12-14 06:54:28.355686] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:14.561 [2024-12-14 06:54:28.355695] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:14.561 [2024-12-14 06:54:28.355704] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:14.561 [2024-12-14 06:54:28.355718] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:14.561 [2024-12-14 06:54:28.358390] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:14.561 [2024-12-14 06:54:28.358539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.561 [2024-12-14 06:54:28.358582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.561 [2024-12-14 06:54:28.358613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fab70 with addr=10.0.0.2, port=4420 00:24:14.561 [2024-12-14 06:54:28.358622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fab70 is same with the state(5) to be set 00:24:14.561 [2024-12-14 06:54:28.358651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fab70 (9): Bad file descriptor 00:24:14.561 [2024-12-14 06:54:28.358664] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:14.561 [2024-12-14 06:54:28.358672] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:14.561 [2024-12-14 06:54:28.358685] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:14.561 [2024-12-14 06:54:28.358712] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:14.561 [2024-12-14 06:54:28.365517] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:14.561 [2024-12-14 06:54:28.365860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.561 [2024-12-14 06:54:28.366074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.561 [2024-12-14 06:54:28.366243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1496410 with addr=10.0.0.3, port=4420 00:24:14.561 [2024-12-14 06:54:28.366358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1496410 is same with the state(5) to be set 00:24:14.561 [2024-12-14 06:54:28.366384] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1496410 (9): Bad file descriptor 00:24:14.561 [2024-12-14 06:54:28.366433] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:14.561 [2024-12-14 06:54:28.366446] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:14.561 [2024-12-14 06:54:28.366455] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:14.561 [2024-12-14 06:54:28.366472] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:14.561 [2024-12-14 06:54:28.368468] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:14.561 [2024-12-14 06:54:28.368794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.561 [2024-12-14 06:54:28.369043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.561 [2024-12-14 06:54:28.369099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fab70 with addr=10.0.0.2, port=4420 00:24:14.561 [2024-12-14 06:54:28.369285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fab70 is same with the state(5) to be set 00:24:14.561 [2024-12-14 06:54:28.369342] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fab70 (9): Bad file descriptor 00:24:14.561 [2024-12-14 06:54:28.369359] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:14.561 [2024-12-14 06:54:28.369367] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:14.561 [2024-12-14 06:54:28.369375] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:14.561 [2024-12-14 06:54:28.369390] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:14.561 [2024-12-14 06:54:28.375823] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:14.561 [2024-12-14 06:54:28.375942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.561 [2024-12-14 06:54:28.376022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.561 [2024-12-14 06:54:28.376038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1496410 with addr=10.0.0.3, port=4420 00:24:14.561 [2024-12-14 06:54:28.376048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1496410 is same with the state(5) to be set 00:24:14.561 [2024-12-14 06:54:28.376064] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1496410 (9): Bad file descriptor 00:24:14.561 [2024-12-14 06:54:28.376076] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:14.561 [2024-12-14 06:54:28.376084] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:14.561 [2024-12-14 06:54:28.376092] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:14.561 [2024-12-14 06:54:28.376105] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:14.561 [2024-12-14 06:54:28.378771] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:14.561 [2024-12-14 06:54:28.379201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.561 [2024-12-14 06:54:28.379298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.561 [2024-12-14 06:54:28.379326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fab70 with addr=10.0.0.2, port=4420 00:24:14.561 [2024-12-14 06:54:28.379358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fab70 is same with the state(5) to be set 00:24:14.561 [2024-12-14 06:54:28.379399] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fab70 (9): Bad file descriptor 00:24:14.561 [2024-12-14 06:54:28.379424] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:14.562 [2024-12-14 06:54:28.379439] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:14.562 [2024-12-14 06:54:28.379452] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:14.562 [2024-12-14 06:54:28.379499] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:14.562 [2024-12-14 06:54:28.385910] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:14.562 [2024-12-14 06:54:28.386037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.562 [2024-12-14 06:54:28.386083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.562 [2024-12-14 06:54:28.386098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1496410 with addr=10.0.0.3, port=4420 00:24:14.562 [2024-12-14 06:54:28.386124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1496410 is same with the state(5) to be set 00:24:14.562 [2024-12-14 06:54:28.386206] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1496410 (9): Bad file descriptor 00:24:14.562 [2024-12-14 06:54:28.386236] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:14.562 [2024-12-14 06:54:28.386246] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:14.562 [2024-12-14 06:54:28.386255] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:14.562 [2024-12-14 06:54:28.386270] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:14.562 [2024-12-14 06:54:28.389117] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:14.562 [2024-12-14 06:54:28.389303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.562 [2024-12-14 06:54:28.389379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.562 [2024-12-14 06:54:28.389394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fab70 with addr=10.0.0.2, port=4420 00:24:14.562 [2024-12-14 06:54:28.389403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fab70 is same with the state(5) to be set 00:24:14.562 [2024-12-14 06:54:28.389419] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fab70 (9): Bad file descriptor 00:24:14.562 [2024-12-14 06:54:28.389431] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:14.562 [2024-12-14 06:54:28.389439] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:14.562 [2024-12-14 06:54:28.389447] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:14.562 [2024-12-14 06:54:28.389460] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:14.562 [2024-12-14 06:54:28.396002] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:14.562 [2024-12-14 06:54:28.396113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.562 [2024-12-14 06:54:28.396197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.562 [2024-12-14 06:54:28.396231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1496410 with addr=10.0.0.3, port=4420 00:24:14.562 [2024-12-14 06:54:28.396240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1496410 is same with the state(5) to be set 00:24:14.562 [2024-12-14 06:54:28.396257] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1496410 (9): Bad file descriptor 00:24:14.562 [2024-12-14 06:54:28.396271] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:14.562 [2024-12-14 06:54:28.396279] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:14.562 [2024-12-14 06:54:28.396288] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:14.562 [2024-12-14 06:54:28.396316] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:14.562 [2024-12-14 06:54:28.399240] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:14.562 [2024-12-14 06:54:28.399332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.562 [2024-12-14 06:54:28.399374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.562 [2024-12-14 06:54:28.399388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fab70 with addr=10.0.0.2, port=4420 00:24:14.562 [2024-12-14 06:54:28.399397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fab70 is same with the state(5) to be set 00:24:14.562 [2024-12-14 06:54:28.399411] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fab70 (9): Bad file descriptor 00:24:14.562 [2024-12-14 06:54:28.399423] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:14.562 [2024-12-14 06:54:28.399431] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:14.562 [2024-12-14 06:54:28.399438] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:14.562 [2024-12-14 06:54:28.399451] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:14.562 [2024-12-14 06:54:28.406084] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:14.562 [2024-12-14 06:54:28.406227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.562 [2024-12-14 06:54:28.406274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.562 [2024-12-14 06:54:28.406290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1496410 with addr=10.0.0.3, port=4420 00:24:14.562 [2024-12-14 06:54:28.406300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1496410 is same with the state(5) to be set 00:24:14.562 [2024-12-14 06:54:28.406315] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1496410 (9): Bad file descriptor 00:24:14.562 [2024-12-14 06:54:28.406329] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:14.562 [2024-12-14 06:54:28.406337] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:14.562 [2024-12-14 06:54:28.406345] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:14.562 [2024-12-14 06:54:28.406359] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:14.562 [2024-12-14 06:54:28.409306] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:14.562 [2024-12-14 06:54:28.409419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.562 [2024-12-14 06:54:28.409470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.562 [2024-12-14 06:54:28.409484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fab70 with addr=10.0.0.2, port=4420 00:24:14.562 [2024-12-14 06:54:28.409507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fab70 is same with the state(5) to be set 00:24:14.562 [2024-12-14 06:54:28.409521] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fab70 (9): Bad file descriptor 00:24:14.562 [2024-12-14 06:54:28.409532] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:14.562 [2024-12-14 06:54:28.409539] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:14.562 [2024-12-14 06:54:28.409547] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:14.562 [2024-12-14 06:54:28.409559] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:14.562 [2024-12-14 06:54:28.416172] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:14.562 [2024-12-14 06:54:28.416309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.562 [2024-12-14 06:54:28.416384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.562 [2024-12-14 06:54:28.416399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1496410 with addr=10.0.0.3, port=4420 00:24:14.562 [2024-12-14 06:54:28.416408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1496410 is same with the state(5) to be set 00:24:14.562 [2024-12-14 06:54:28.416438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1496410 (9): Bad file descriptor 00:24:14.562 [2024-12-14 06:54:28.416467] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:14.562 [2024-12-14 06:54:28.416475] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:14.562 [2024-12-14 06:54:28.416483] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:14.562 [2024-12-14 06:54:28.416497] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:14.562 [2024-12-14 06:54:28.419379] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:14.562 [2024-12-14 06:54:28.419657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.562 [2024-12-14 06:54:28.419704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.562 [2024-12-14 06:54:28.419720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fab70 with addr=10.0.0.2, port=4420 00:24:14.562 [2024-12-14 06:54:28.419730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fab70 is same with the state(5) to be set 00:24:14.562 [2024-12-14 06:54:28.419764] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fab70 (9): Bad file descriptor 00:24:14.562 [2024-12-14 06:54:28.419779] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:14.562 [2024-12-14 06:54:28.419787] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:14.562 [2024-12-14 06:54:28.419795] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:14.562 [2024-12-14 06:54:28.419810] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:14.562 [2024-12-14 06:54:28.426257] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:14.562 [2024-12-14 06:54:28.426524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.562 [2024-12-14 06:54:28.426588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.562 [2024-12-14 06:54:28.426604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1496410 with addr=10.0.0.3, port=4420 00:24:14.562 [2024-12-14 06:54:28.426615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1496410 is same with the state(5) to be set 00:24:14.562 [2024-12-14 06:54:28.426657] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1496410 (9): Bad file descriptor 00:24:14.562 [2024-12-14 06:54:28.426689] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:14.562 [2024-12-14 06:54:28.426699] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:14.562 [2024-12-14 06:54:28.426723] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:14.562 [2024-12-14 06:54:28.426770] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:14.562 [2024-12-14 06:54:28.429616] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:14.562 [2024-12-14 06:54:28.429900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.562 [2024-12-14 06:54:28.429977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.563 [2024-12-14 06:54:28.429994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fab70 with addr=10.0.0.2, port=4420 00:24:14.563 [2024-12-14 06:54:28.430005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fab70 is same with the state(5) to be set 00:24:14.563 [2024-12-14 06:54:28.430039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fab70 (9): Bad file descriptor 00:24:14.563 [2024-12-14 06:54:28.430055] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:14.563 [2024-12-14 06:54:28.430064] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:14.563 [2024-12-14 06:54:28.430073] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:14.563 [2024-12-14 06:54:28.430088] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:14.563 [2024-12-14 06:54:28.436470] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:14.563 [2024-12-14 06:54:28.436753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.563 [2024-12-14 06:54:28.436800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.563 [2024-12-14 06:54:28.436816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1496410 with addr=10.0.0.3, port=4420 00:24:14.563 [2024-12-14 06:54:28.436842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1496410 is same with the state(5) to be set 00:24:14.563 [2024-12-14 06:54:28.436860] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1496410 (9): Bad file descriptor 00:24:14.563 [2024-12-14 06:54:28.436885] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:14.563 [2024-12-14 06:54:28.436893] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:14.563 [2024-12-14 06:54:28.436902] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:14.563 [2024-12-14 06:54:28.436933] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:14.563 [2024-12-14 06:54:28.439861] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:14.563 [2024-12-14 06:54:28.440140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.563 [2024-12-14 06:54:28.440187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.563 [2024-12-14 06:54:28.440202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fab70 with addr=10.0.0.2, port=4420 00:24:14.563 [2024-12-14 06:54:28.440211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fab70 is same with the state(5) to be set 00:24:14.563 [2024-12-14 06:54:28.440246] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fab70 (9): Bad file descriptor 00:24:14.563 [2024-12-14 06:54:28.440262] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:14.563 [2024-12-14 06:54:28.440270] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:14.563 [2024-12-14 06:54:28.440278] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:14.563 [2024-12-14 06:54:28.440293] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:14.563 [2024-12-14 06:54:28.446698] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:14.563 [2024-12-14 06:54:28.447042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.563 [2024-12-14 06:54:28.447104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.563 [2024-12-14 06:54:28.447120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1496410 with addr=10.0.0.3, port=4420 00:24:14.563 [2024-12-14 06:54:28.447130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1496410 is same with the state(5) to be set 00:24:14.563 [2024-12-14 06:54:28.447146] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1496410 (9): Bad file descriptor 00:24:14.563 [2024-12-14 06:54:28.447176] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:14.563 [2024-12-14 06:54:28.447185] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:14.563 [2024-12-14 06:54:28.447194] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:14.563 [2024-12-14 06:54:28.447209] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:14.563 [2024-12-14 06:54:28.450113] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:14.563 [2024-12-14 06:54:28.450239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.563 [2024-12-14 06:54:28.450281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.563 [2024-12-14 06:54:28.450296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fab70 with addr=10.0.0.2, port=4420 00:24:14.563 [2024-12-14 06:54:28.450306] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fab70 is same with the state(5) to be set 00:24:14.563 [2024-12-14 06:54:28.450338] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fab70 (9): Bad file descriptor 00:24:14.563 [2024-12-14 06:54:28.450352] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:14.563 [2024-12-14 06:54:28.450360] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:14.563 [2024-12-14 06:54:28.450368] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:14.563 [2024-12-14 06:54:28.450381] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:14.563 [2024-12-14 06:54:28.456971] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:14.563 [2024-12-14 06:54:28.457080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.563 [2024-12-14 06:54:28.457124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.563 [2024-12-14 06:54:28.457171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1496410 with addr=10.0.0.3, port=4420 00:24:14.563 [2024-12-14 06:54:28.457195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1496410 is same with the state(5) to be set 00:24:14.563 [2024-12-14 06:54:28.457211] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1496410 (9): Bad file descriptor 00:24:14.563 [2024-12-14 06:54:28.457224] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:14.563 [2024-12-14 06:54:28.457232] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:14.563 [2024-12-14 06:54:28.457255] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 
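Once the reconnect attempts above give up on 4420, the checks in the @164-@167 xtrace below confirm that each controller is left with only the 4421 path. Those checks are the same get_subsystem_paths/get_bdev_list helpers used earlier; a rough equivalent against the host-side RPC socket, with the command and jq filter taken from the xtrace (scripts/rpc.py location is an assumption):
# Expect "4421" for each controller once the 4420 path has been dropped.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs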
00:24:14.563 [2024-12-14 06:54:28.457331] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:14.563 [2024-12-14 06:54:28.457351] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:14.563 [2024-12-14 06:54:28.457371] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:14.563 [2024-12-14 06:54:28.457406] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:24:14.563 [2024-12-14 06:54:28.457421] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:24:14.563 [2024-12-14 06:54:28.457436] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:14.563 [2024-12-14 06:54:28.457452] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:14.563 [2024-12-14 06:54:28.543424] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:14.563 [2024-12-14 06:54:28.543474] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:24:15.499 06:54:29 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:24:15.499 06:54:29 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:15.499 06:54:29 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:24:15.499 06:54:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.499 06:54:29 -- host/mdns_discovery.sh@68 -- # sort 00:24:15.499 06:54:29 -- common/autotest_common.sh@10 -- # set +x 00:24:15.499 06:54:29 -- host/mdns_discovery.sh@68 -- # xargs 00:24:15.499 06:54:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.499 06:54:29 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:24:15.499 06:54:29 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:24:15.499 06:54:29 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.499 06:54:29 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:24:15.499 06:54:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.499 06:54:29 -- common/autotest_common.sh@10 -- # set +x 00:24:15.499 06:54:29 -- host/mdns_discovery.sh@64 -- # sort 00:24:15.499 06:54:29 -- host/mdns_discovery.sh@64 -- # xargs 00:24:15.499 06:54:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.499 06:54:29 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:24:15.499 06:54:29 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:24:15.499 06:54:29 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:24:15.499 06:54:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.499 06:54:29 -- host/mdns_discovery.sh@72 -- # sort -n 00:24:15.499 06:54:29 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:15.499 06:54:29 -- common/autotest_common.sh@10 -- # set +x 
00:24:15.499 06:54:29 -- host/mdns_discovery.sh@72 -- # xargs 00:24:15.499 06:54:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.757 06:54:29 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:24:15.757 06:54:29 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:24:15.757 06:54:29 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:24:15.757 06:54:29 -- host/mdns_discovery.sh@72 -- # sort -n 00:24:15.757 06:54:29 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:15.757 06:54:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.757 06:54:29 -- common/autotest_common.sh@10 -- # set +x 00:24:15.757 06:54:29 -- host/mdns_discovery.sh@72 -- # xargs 00:24:15.757 06:54:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.757 06:54:29 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:24:15.757 06:54:29 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:24:15.757 06:54:29 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:24:15.757 06:54:29 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:24:15.757 06:54:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.757 06:54:29 -- common/autotest_common.sh@10 -- # set +x 00:24:15.758 06:54:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.758 06:54:29 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:24:15.758 06:54:29 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:24:15.758 06:54:29 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:24:15.758 06:54:29 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:24:15.758 06:54:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.758 06:54:29 -- common/autotest_common.sh@10 -- # set +x 00:24:15.758 06:54:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.758 06:54:29 -- host/mdns_discovery.sh@172 -- # sleep 1 00:24:15.758 [2024-12-14 06:54:29.628569] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:24:16.695 06:54:30 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:24:16.695 06:54:30 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:24:16.695 06:54:30 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:24:16.695 06:54:30 -- host/mdns_discovery.sh@80 -- # sort 00:24:16.695 06:54:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.695 06:54:30 -- common/autotest_common.sh@10 -- # set +x 00:24:16.695 06:54:30 -- host/mdns_discovery.sh@80 -- # xargs 00:24:16.695 06:54:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.961 06:54:30 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:24:16.961 06:54:30 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:24:16.961 06:54:30 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:16.961 06:54:30 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:24:16.961 06:54:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.961 06:54:30 -- common/autotest_common.sh@10 -- # set +x 00:24:16.961 06:54:30 -- host/mdns_discovery.sh@68 -- # sort 00:24:16.961 06:54:30 -- host/mdns_discovery.sh@68 -- # xargs 00:24:16.961 06:54:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.961 06:54:30 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:24:16.961 06:54:30 -- 
host/mdns_discovery.sh@176 -- # get_bdev_list 00:24:16.961 06:54:30 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:16.961 06:54:30 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:24:16.961 06:54:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.961 06:54:30 -- common/autotest_common.sh@10 -- # set +x 00:24:16.961 06:54:30 -- host/mdns_discovery.sh@64 -- # xargs 00:24:16.961 06:54:30 -- host/mdns_discovery.sh@64 -- # sort 00:24:16.961 06:54:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.961 06:54:30 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:24:16.961 06:54:30 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:24:16.961 06:54:30 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:24:16.961 06:54:30 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:24:16.961 06:54:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.961 06:54:30 -- common/autotest_common.sh@10 -- # set +x 00:24:16.961 06:54:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.961 06:54:30 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:24:16.961 06:54:30 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:24:16.961 06:54:30 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:24:16.961 06:54:30 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:16.961 06:54:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.961 06:54:30 -- common/autotest_common.sh@10 -- # set +x 00:24:16.962 06:54:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.962 06:54:30 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:24:16.962 06:54:30 -- common/autotest_common.sh@650 -- # local es=0 00:24:16.962 06:54:30 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:24:16.962 06:54:30 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:16.962 06:54:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.962 06:54:30 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:16.962 06:54:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.962 06:54:30 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:24:16.962 06:54:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.962 06:54:30 -- common/autotest_common.sh@10 -- # set +x 00:24:16.962 [2024-12-14 06:54:30.863640] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:24:16.962 2024/12/14 06:54:30 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:24:16.962 request: 00:24:16.962 { 00:24:16.962 "method": "bdev_nvme_start_mdns_discovery", 00:24:16.962 "params": { 00:24:16.962 "name": "mdns", 00:24:16.962 "svcname": "_nvme-disc._http", 00:24:16.962 "hostnqn": "nqn.2021-12.io.spdk:test" 00:24:16.962 } 00:24:16.962 } 00:24:16.962 Got JSON-RPC error response 00:24:16.962 GoRPCClient: error on 
JSON-RPC call 00:24:16.962 06:54:30 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:16.962 06:54:30 -- common/autotest_common.sh@653 -- # es=1 00:24:16.962 06:54:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:16.962 06:54:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:16.962 06:54:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:16.962 06:54:30 -- host/mdns_discovery.sh@183 -- # sleep 5 00:24:17.556 [2024-12-14 06:54:31.252270] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:24:17.556 [2024-12-14 06:54:31.352267] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:24:17.556 [2024-12-14 06:54:31.452277] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:24:17.556 [2024-12-14 06:54:31.452497] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:24:17.556 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:24:17.556 cookie is 0 00:24:17.556 is_local: 1 00:24:17.556 our_own: 0 00:24:17.556 wide_area: 0 00:24:17.556 multicast: 1 00:24:17.556 cached: 1 00:24:17.815 [2024-12-14 06:54:31.552274] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:24:17.815 [2024-12-14 06:54:31.552464] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:24:17.815 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:24:17.815 cookie is 0 00:24:17.815 is_local: 1 00:24:17.815 our_own: 0 00:24:17.815 wide_area: 0 00:24:17.815 multicast: 1 00:24:17.815 cached: 1 00:24:18.750 [2024-12-14 06:54:32.460352] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:24:18.750 [2024-12-14 06:54:32.460555] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:24:18.750 [2024-12-14 06:54:32.460598] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:18.750 [2024-12-14 06:54:32.546577] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:24:18.750 [2024-12-14 06:54:32.559755] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:18.750 [2024-12-14 06:54:32.559774] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:18.750 [2024-12-14 06:54:32.559789] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:18.750 [2024-12-14 06:54:32.611181] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:24:18.750 [2024-12-14 06:54:32.611217] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:24:18.750 [2024-12-14 06:54:32.645595] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:24:18.750 [2024-12-14 06:54:32.704367] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:24:18.750 [2024-12-14 06:54:32.704390] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:22.041 06:54:35 -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:24:22.041 06:54:35 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:24:22.041 06:54:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.041 06:54:35 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:24:22.041 06:54:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.041 06:54:35 -- host/mdns_discovery.sh@80 -- # sort 00:24:22.041 06:54:35 -- host/mdns_discovery.sh@80 -- # xargs 00:24:22.041 06:54:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.041 06:54:35 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:24:22.041 06:54:35 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:24:22.041 06:54:35 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:22.041 06:54:35 -- host/mdns_discovery.sh@76 -- # sort 00:24:22.041 06:54:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.041 06:54:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.041 06:54:35 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:24:22.041 06:54:35 -- host/mdns_discovery.sh@76 -- # xargs 00:24:22.041 06:54:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.041 06:54:35 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:24:22.041 06:54:35 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:24:22.041 06:54:35 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.041 06:54:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.041 06:54:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.041 06:54:35 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:24:22.041 06:54:35 -- host/mdns_discovery.sh@64 -- # sort 00:24:22.041 06:54:35 -- host/mdns_discovery.sh@64 -- # xargs 00:24:22.300 06:54:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.300 06:54:36 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:24:22.300 06:54:36 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:22.300 06:54:36 -- common/autotest_common.sh@650 -- # local es=0 00:24:22.300 06:54:36 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:22.300 06:54:36 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:22.300 06:54:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:22.300 06:54:36 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:22.300 06:54:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:22.300 06:54:36 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:22.300 06:54:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.300 06:54:36 -- common/autotest_common.sh@10 -- # set +x 00:24:22.300 [2024-12-14 06:54:36.057751] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:24:22.300 2024/12/14 06:54:36 
error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:24:22.300 request: 00:24:22.300 { 00:24:22.300 "method": "bdev_nvme_start_mdns_discovery", 00:24:22.300 "params": { 00:24:22.300 "name": "cdc", 00:24:22.300 "svcname": "_nvme-disc._tcp", 00:24:22.300 "hostnqn": "nqn.2021-12.io.spdk:test" 00:24:22.300 } 00:24:22.300 } 00:24:22.300 Got JSON-RPC error response 00:24:22.300 GoRPCClient: error on JSON-RPC call 00:24:22.300 06:54:36 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:22.300 06:54:36 -- common/autotest_common.sh@653 -- # es=1 00:24:22.300 06:54:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:22.300 06:54:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:22.300 06:54:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:22.300 06:54:36 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:24:22.300 06:54:36 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:22.300 06:54:36 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:24:22.300 06:54:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.300 06:54:36 -- host/mdns_discovery.sh@76 -- # sort 00:24:22.300 06:54:36 -- common/autotest_common.sh@10 -- # set +x 00:24:22.300 06:54:36 -- host/mdns_discovery.sh@76 -- # xargs 00:24:22.300 06:54:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.300 06:54:36 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:24:22.300 06:54:36 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:24:22.300 06:54:36 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.300 06:54:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.300 06:54:36 -- common/autotest_common.sh@10 -- # set +x 00:24:22.300 06:54:36 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:24:22.300 06:54:36 -- host/mdns_discovery.sh@64 -- # sort 00:24:22.300 06:54:36 -- host/mdns_discovery.sh@64 -- # xargs 00:24:22.300 06:54:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.300 06:54:36 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:24:22.300 06:54:36 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:24:22.300 06:54:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.300 06:54:36 -- common/autotest_common.sh@10 -- # set +x 00:24:22.300 06:54:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.300 06:54:36 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:24:22.300 06:54:36 -- host/mdns_discovery.sh@197 -- # kill 87944 00:24:22.300 06:54:36 -- host/mdns_discovery.sh@200 -- # wait 87944 00:24:22.559 [2024-12-14 06:54:36.356421] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:24:22.559 06:54:36 -- host/mdns_discovery.sh@201 -- # kill 88031 00:24:22.559 Got SIGTERM, quitting. 
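Both "File exists" failures above (@182 with a clashing discovery name, @190 with a clashing service) come from the same guard: only one mDNS discovery instance may use a given name, and only one may browse a given svcname. A rough reproduction outside the harness, with the arguments copied from the xtrace (the /tmp/host.sock path and scripts/rpc.py location are assumptions):
# First registration succeeds.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
    -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
# Re-using the name "mdns", or the service _nvme-disc._tcp under a new name,
# is rejected with JSON-RPC error -17 (File exists), as logged above.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
    -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test || echo "expected failure: name in use"
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
    -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test || echo "expected failure: service in use"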
00:24:22.559 06:54:36 -- host/mdns_discovery.sh@202 -- # kill 87973 00:24:22.559 06:54:36 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:24:22.559 06:54:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:22.559 06:54:36 -- nvmf/common.sh@116 -- # sync 00:24:22.559 Got SIGTERM, quitting. 00:24:22.559 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:24:22.559 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:24:22.559 avahi-daemon 0.8 exiting. 00:24:22.817 06:54:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:22.818 06:54:36 -- nvmf/common.sh@119 -- # set +e 00:24:22.818 06:54:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:22.818 06:54:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:22.818 rmmod nvme_tcp 00:24:22.818 rmmod nvme_fabrics 00:24:22.818 rmmod nvme_keyring 00:24:22.818 06:54:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:22.818 06:54:36 -- nvmf/common.sh@123 -- # set -e 00:24:22.818 06:54:36 -- nvmf/common.sh@124 -- # return 0 00:24:22.818 06:54:36 -- nvmf/common.sh@477 -- # '[' -n 87893 ']' 00:24:22.818 06:54:36 -- nvmf/common.sh@478 -- # killprocess 87893 00:24:22.818 06:54:36 -- common/autotest_common.sh@936 -- # '[' -z 87893 ']' 00:24:22.818 06:54:36 -- common/autotest_common.sh@940 -- # kill -0 87893 00:24:22.818 06:54:36 -- common/autotest_common.sh@941 -- # uname 00:24:22.818 06:54:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:22.818 06:54:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87893 00:24:22.818 06:54:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:22.818 06:54:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:22.818 killing process with pid 87893 00:24:22.818 06:54:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87893' 00:24:22.818 06:54:36 -- common/autotest_common.sh@955 -- # kill 87893 00:24:22.818 06:54:36 -- common/autotest_common.sh@960 -- # wait 87893 00:24:23.076 06:54:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:23.076 06:54:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:23.076 06:54:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:23.076 06:54:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:23.076 06:54:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:23.076 06:54:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.076 06:54:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:23.076 06:54:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.334 06:54:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:23.334 ************************************ 00:24:23.334 END TEST nvmf_mdns_discovery 00:24:23.334 ************************************ 00:24:23.334 00:24:23.334 real 0m21.240s 00:24:23.334 user 0m41.158s 00:24:23.334 sys 0m2.229s 00:24:23.334 06:54:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:23.334 06:54:37 -- common/autotest_common.sh@10 -- # set +x 00:24:23.334 06:54:37 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:24:23.334 06:54:37 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:24:23.334 06:54:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:23.334 06:54:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:23.334 06:54:37 -- common/autotest_common.sh@10 -- # set +x 00:24:23.334 
************************************ 00:24:23.334 START TEST nvmf_multipath 00:24:23.334 ************************************ 00:24:23.334 06:54:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:24:23.334 * Looking for test storage... 00:24:23.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:23.334 06:54:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:23.334 06:54:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:23.334 06:54:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:23.334 06:54:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:23.334 06:54:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:23.334 06:54:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:23.334 06:54:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:23.334 06:54:37 -- scripts/common.sh@335 -- # IFS=.-: 00:24:23.334 06:54:37 -- scripts/common.sh@335 -- # read -ra ver1 00:24:23.334 06:54:37 -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.334 06:54:37 -- scripts/common.sh@336 -- # read -ra ver2 00:24:23.334 06:54:37 -- scripts/common.sh@337 -- # local 'op=<' 00:24:23.334 06:54:37 -- scripts/common.sh@339 -- # ver1_l=2 00:24:23.334 06:54:37 -- scripts/common.sh@340 -- # ver2_l=1 00:24:23.334 06:54:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:23.334 06:54:37 -- scripts/common.sh@343 -- # case "$op" in 00:24:23.334 06:54:37 -- scripts/common.sh@344 -- # : 1 00:24:23.334 06:54:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:23.334 06:54:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:23.334 06:54:37 -- scripts/common.sh@364 -- # decimal 1 00:24:23.334 06:54:37 -- scripts/common.sh@352 -- # local d=1 00:24:23.334 06:54:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.334 06:54:37 -- scripts/common.sh@354 -- # echo 1 00:24:23.334 06:54:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:23.334 06:54:37 -- scripts/common.sh@365 -- # decimal 2 00:24:23.334 06:54:37 -- scripts/common.sh@352 -- # local d=2 00:24:23.334 06:54:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:23.334 06:54:37 -- scripts/common.sh@354 -- # echo 2 00:24:23.592 06:54:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:23.592 06:54:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:23.592 06:54:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:23.592 06:54:37 -- scripts/common.sh@367 -- # return 0 00:24:23.592 06:54:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.592 06:54:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:23.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.592 --rc genhtml_branch_coverage=1 00:24:23.592 --rc genhtml_function_coverage=1 00:24:23.592 --rc genhtml_legend=1 00:24:23.592 --rc geninfo_all_blocks=1 00:24:23.592 --rc geninfo_unexecuted_blocks=1 00:24:23.592 00:24:23.592 ' 00:24:23.592 06:54:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:23.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.592 --rc genhtml_branch_coverage=1 00:24:23.592 --rc genhtml_function_coverage=1 00:24:23.592 --rc genhtml_legend=1 00:24:23.592 --rc geninfo_all_blocks=1 00:24:23.592 --rc geninfo_unexecuted_blocks=1 00:24:23.592 00:24:23.592 ' 00:24:23.592 06:54:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:23.592 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.592 --rc genhtml_branch_coverage=1 00:24:23.592 --rc genhtml_function_coverage=1 00:24:23.592 --rc genhtml_legend=1 00:24:23.592 --rc geninfo_all_blocks=1 00:24:23.592 --rc geninfo_unexecuted_blocks=1 00:24:23.592 00:24:23.592 ' 00:24:23.592 06:54:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:23.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.592 --rc genhtml_branch_coverage=1 00:24:23.592 --rc genhtml_function_coverage=1 00:24:23.592 --rc genhtml_legend=1 00:24:23.592 --rc geninfo_all_blocks=1 00:24:23.592 --rc geninfo_unexecuted_blocks=1 00:24:23.592 00:24:23.592 ' 00:24:23.592 06:54:37 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:23.592 06:54:37 -- nvmf/common.sh@7 -- # uname -s 00:24:23.592 06:54:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:23.592 06:54:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.592 06:54:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:23.592 06:54:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.592 06:54:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:23.592 06:54:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:23.592 06:54:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.592 06:54:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:23.592 06:54:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.592 06:54:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:23.592 06:54:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:24:23.592 06:54:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:24:23.592 06:54:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.592 06:54:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:23.592 06:54:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:23.592 06:54:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:23.593 06:54:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.593 06:54:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.593 06:54:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.593 06:54:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.593 06:54:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.593 06:54:37 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.593 06:54:37 -- paths/export.sh@5 -- # export PATH 00:24:23.593 06:54:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.593 06:54:37 -- nvmf/common.sh@46 -- # : 0 00:24:23.593 06:54:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:23.593 06:54:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:23.593 06:54:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:23.593 06:54:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:23.593 06:54:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.593 06:54:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:23.593 06:54:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:23.593 06:54:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:23.593 06:54:37 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:23.593 06:54:37 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:23.593 06:54:37 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:23.593 06:54:37 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:23.593 06:54:37 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:23.593 06:54:37 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:23.593 06:54:37 -- host/multipath.sh@30 -- # nvmftestinit 00:24:23.593 06:54:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:23.593 06:54:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.593 06:54:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:23.593 06:54:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:23.593 06:54:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:23.593 06:54:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.593 06:54:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:23.593 06:54:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.593 06:54:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:23.593 06:54:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:23.593 06:54:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:23.593 06:54:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:23.593 06:54:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:23.593 06:54:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:23.593 06:54:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:23.593 06:54:37 -- nvmf/common.sh@141 -- 
# NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:23.593 06:54:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:23.593 06:54:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:23.593 06:54:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:23.593 06:54:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:23.593 06:54:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:23.593 06:54:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.593 06:54:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:23.593 06:54:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:23.593 06:54:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:23.593 06:54:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:23.593 06:54:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:23.593 06:54:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:23.593 Cannot find device "nvmf_tgt_br" 00:24:23.593 06:54:37 -- nvmf/common.sh@154 -- # true 00:24:23.593 06:54:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:23.593 Cannot find device "nvmf_tgt_br2" 00:24:23.593 06:54:37 -- nvmf/common.sh@155 -- # true 00:24:23.593 06:54:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:23.593 06:54:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:23.593 Cannot find device "nvmf_tgt_br" 00:24:23.593 06:54:37 -- nvmf/common.sh@157 -- # true 00:24:23.593 06:54:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:23.593 Cannot find device "nvmf_tgt_br2" 00:24:23.593 06:54:37 -- nvmf/common.sh@158 -- # true 00:24:23.593 06:54:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:23.593 06:54:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:23.593 06:54:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:23.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:23.593 06:54:37 -- nvmf/common.sh@161 -- # true 00:24:23.593 06:54:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:23.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:23.593 06:54:37 -- nvmf/common.sh@162 -- # true 00:24:23.593 06:54:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:23.593 06:54:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:23.593 06:54:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:23.593 06:54:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:23.593 06:54:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:23.593 06:54:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:23.593 06:54:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:23.593 06:54:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:23.593 06:54:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:23.593 06:54:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:23.593 06:54:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:23.593 06:54:37 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:24:23.852 06:54:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:23.852 06:54:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:23.852 06:54:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:23.852 06:54:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:23.852 06:54:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:23.852 06:54:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:23.852 06:54:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:23.852 06:54:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:23.852 06:54:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:23.852 06:54:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:23.852 06:54:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:23.852 06:54:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:23.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:23.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:24:23.852 00:24:23.852 --- 10.0.0.2 ping statistics --- 00:24:23.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.852 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:24:23.852 06:54:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:23.852 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:23.852 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:24:23.852 00:24:23.852 --- 10.0.0.3 ping statistics --- 00:24:23.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.852 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:24:23.852 06:54:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:23.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:23.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:24:23.852 00:24:23.852 --- 10.0.0.1 ping statistics --- 00:24:23.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.852 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:24:23.852 06:54:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.852 06:54:37 -- nvmf/common.sh@421 -- # return 0 00:24:23.852 06:54:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:23.852 06:54:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.852 06:54:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:23.852 06:54:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:23.852 06:54:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.852 06:54:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:23.852 06:54:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:23.852 06:54:37 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:24:23.852 06:54:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:23.852 06:54:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:23.852 06:54:37 -- common/autotest_common.sh@10 -- # set +x 00:24:23.852 06:54:37 -- nvmf/common.sh@469 -- # nvmfpid=88545 00:24:23.852 06:54:37 -- nvmf/common.sh@470 -- # waitforlisten 88545 00:24:23.852 06:54:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:23.852 06:54:37 -- common/autotest_common.sh@829 -- # '[' -z 88545 ']' 00:24:23.852 06:54:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.852 06:54:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:23.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.852 06:54:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.852 06:54:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.852 06:54:37 -- common/autotest_common.sh@10 -- # set +x 00:24:23.852 [2024-12-14 06:54:37.786467] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:23.852 [2024-12-14 06:54:37.786557] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.110 [2024-12-14 06:54:37.929520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:24.110 [2024-12-14 06:54:38.062464] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:24.110 [2024-12-14 06:54:38.062664] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.110 [2024-12-14 06:54:38.062681] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.110 [2024-12-14 06:54:38.062693] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
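The nvmf_veth_init block above is easier to follow condensed. Everything below is taken directly from the trace (the matching ip link set ... up calls are omitted for brevity): one veth pair for the initiator and two for the target namespace, all joined by a bridge, with TCP/4420 allowed in on the initiator interface:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above simply confirm that 10.0.0.2 and 10.0.0.3 are reachable from the initiator side and 10.0.0.1 from inside the namespace, after which nvmf_tgt was launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x3, as traced) so its listeners sit behind the veth pair.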
00:24:24.111 [2024-12-14 06:54:38.062868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.111 [2024-12-14 06:54:38.062882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.047 06:54:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:25.047 06:54:38 -- common/autotest_common.sh@862 -- # return 0 00:24:25.047 06:54:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:25.047 06:54:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:25.047 06:54:38 -- common/autotest_common.sh@10 -- # set +x 00:24:25.047 06:54:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.047 06:54:38 -- host/multipath.sh@33 -- # nvmfapp_pid=88545 00:24:25.047 06:54:38 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:25.306 [2024-12-14 06:54:39.117270] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.306 06:54:39 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:25.565 Malloc0 00:24:25.565 06:54:39 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:25.824 06:54:39 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:26.082 06:54:39 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:26.341 [2024-12-14 06:54:40.177035] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.341 06:54:40 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:26.600 [2024-12-14 06:54:40.417202] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:26.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:26.600 06:54:40 -- host/multipath.sh@44 -- # bdevperf_pid=88650 00:24:26.600 06:54:40 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:26.600 06:54:40 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:26.600 06:54:40 -- host/multipath.sh@47 -- # waitforlisten 88650 /var/tmp/bdevperf.sock 00:24:26.600 06:54:40 -- common/autotest_common.sh@829 -- # '[' -z 88650 ']' 00:24:26.600 06:54:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:26.600 06:54:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:26.600 06:54:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
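Stripped of the xtrace noise, the RPC sequence just traced provisions the target side of the multipath test. Every identifier below is exactly as used above; rpc.py abbreviates the full scripts/rpc.py path from the trace:

  # target-side provisioning (issued against the nvmf_tgt running in the namespace)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf (launched above with -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90) then attaches one controller per listener, the second with -x multipath. In the passes that follow, set_ANA_state flips each listener's ANA state via nvmf_subsystem_listener_set_ana_state, and confirm_io_on_port attaches the nvmf_path.bt bpftrace probe to the target pid and parses the resulting @path[10.0.0.2, <port>] counters with the jq/awk/cut/sed pipeline shown in the trace to verify which port actually carried the I/O.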
00:24:26.600 06:54:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:26.600 06:54:40 -- common/autotest_common.sh@10 -- # set +x 00:24:27.536 06:54:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:27.536 06:54:41 -- common/autotest_common.sh@862 -- # return 0 00:24:27.536 06:54:41 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:27.795 06:54:41 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:28.054 Nvme0n1 00:24:28.313 06:54:42 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:28.571 Nvme0n1 00:24:28.571 06:54:42 -- host/multipath.sh@78 -- # sleep 1 00:24:28.571 06:54:42 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:29.507 06:54:43 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:24:29.507 06:54:43 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:29.766 06:54:43 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:30.024 06:54:43 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:24:30.024 06:54:43 -- host/multipath.sh@65 -- # dtrace_pid=88744 00:24:30.024 06:54:43 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88545 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:30.024 06:54:43 -- host/multipath.sh@66 -- # sleep 6 00:24:36.585 06:54:49 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:36.585 06:54:49 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:36.585 06:54:50 -- host/multipath.sh@67 -- # active_port=4421 00:24:36.586 06:54:50 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:36.586 Attaching 4 probes... 
00:24:36.586 @path[10.0.0.2, 4421]: 19034 00:24:36.586 @path[10.0.0.2, 4421]: 19521 00:24:36.586 @path[10.0.0.2, 4421]: 17946 00:24:36.586 @path[10.0.0.2, 4421]: 17544 00:24:36.586 @path[10.0.0.2, 4421]: 16902 00:24:36.586 06:54:50 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:36.586 06:54:50 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:36.586 06:54:50 -- host/multipath.sh@69 -- # sed -n 1p 00:24:36.586 06:54:50 -- host/multipath.sh@69 -- # port=4421 00:24:36.586 06:54:50 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:36.586 06:54:50 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:36.586 06:54:50 -- host/multipath.sh@72 -- # kill 88744 00:24:36.586 06:54:50 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:36.586 06:54:50 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:24:36.586 06:54:50 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:36.586 06:54:50 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:36.844 06:54:50 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:24:36.845 06:54:50 -- host/multipath.sh@65 -- # dtrace_pid=88871 00:24:36.845 06:54:50 -- host/multipath.sh@66 -- # sleep 6 00:24:36.845 06:54:50 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88545 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:43.407 06:54:56 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:43.407 06:54:56 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:43.407 06:54:57 -- host/multipath.sh@67 -- # active_port=4420 00:24:43.407 06:54:57 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:43.407 Attaching 4 probes... 
00:24:43.407 @path[10.0.0.2, 4420]: 18302 00:24:43.407 @path[10.0.0.2, 4420]: 21089 00:24:43.407 @path[10.0.0.2, 4420]: 20743 00:24:43.407 @path[10.0.0.2, 4420]: 21238 00:24:43.407 @path[10.0.0.2, 4420]: 21094 00:24:43.407 06:54:57 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:43.407 06:54:57 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:43.407 06:54:57 -- host/multipath.sh@69 -- # sed -n 1p 00:24:43.407 06:54:57 -- host/multipath.sh@69 -- # port=4420 00:24:43.407 06:54:57 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:43.407 06:54:57 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:43.407 06:54:57 -- host/multipath.sh@72 -- # kill 88871 00:24:43.407 06:54:57 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:43.407 06:54:57 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:24:43.407 06:54:57 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:43.667 06:54:57 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:43.938 06:54:57 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:24:43.938 06:54:57 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88545 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:43.938 06:54:57 -- host/multipath.sh@65 -- # dtrace_pid=89006 00:24:43.938 06:54:57 -- host/multipath.sh@66 -- # sleep 6 00:24:50.516 06:55:03 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:50.516 06:55:03 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:50.516 06:55:04 -- host/multipath.sh@67 -- # active_port=4421 00:24:50.516 06:55:04 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:50.516 Attaching 4 probes... 
00:24:50.516 @path[10.0.0.2, 4421]: 17613 00:24:50.516 @path[10.0.0.2, 4421]: 20417 00:24:50.516 @path[10.0.0.2, 4421]: 18864 00:24:50.516 @path[10.0.0.2, 4421]: 18724 00:24:50.516 @path[10.0.0.2, 4421]: 18658 00:24:50.516 06:55:04 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:50.516 06:55:04 -- host/multipath.sh@69 -- # sed -n 1p 00:24:50.516 06:55:04 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:50.516 06:55:04 -- host/multipath.sh@69 -- # port=4421 00:24:50.516 06:55:04 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:50.516 06:55:04 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:50.516 06:55:04 -- host/multipath.sh@72 -- # kill 89006 00:24:50.516 06:55:04 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:50.516 06:55:04 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:24:50.516 06:55:04 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:50.516 06:55:04 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:50.516 06:55:04 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:24:50.516 06:55:04 -- host/multipath.sh@65 -- # dtrace_pid=89138 00:24:50.516 06:55:04 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88545 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:50.516 06:55:04 -- host/multipath.sh@66 -- # sleep 6 00:24:57.081 06:55:10 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:57.081 06:55:10 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:24:57.081 06:55:10 -- host/multipath.sh@67 -- # active_port= 00:24:57.081 06:55:10 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:57.081 Attaching 4 probes... 
00:24:57.081 00:24:57.081 00:24:57.081 00:24:57.081 00:24:57.081 00:24:57.081 06:55:10 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:57.081 06:55:10 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:57.081 06:55:10 -- host/multipath.sh@69 -- # sed -n 1p 00:24:57.081 06:55:10 -- host/multipath.sh@69 -- # port= 00:24:57.081 06:55:10 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:24:57.081 06:55:10 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:24:57.081 06:55:10 -- host/multipath.sh@72 -- # kill 89138 00:24:57.081 06:55:10 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:57.081 06:55:10 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:24:57.081 06:55:10 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:57.081 06:55:11 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:57.648 06:55:11 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:24:57.648 06:55:11 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88545 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:57.648 06:55:11 -- host/multipath.sh@65 -- # dtrace_pid=89267 00:24:57.648 06:55:11 -- host/multipath.sh@66 -- # sleep 6 00:25:04.218 06:55:17 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:04.218 06:55:17 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:04.218 06:55:17 -- host/multipath.sh@67 -- # active_port=4421 00:25:04.218 06:55:17 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:04.218 Attaching 4 probes... 
00:25:04.218 @path[10.0.0.2, 4421]: 20905 00:25:04.218 @path[10.0.0.2, 4421]: 21268 00:25:04.218 @path[10.0.0.2, 4421]: 21250 00:25:04.218 @path[10.0.0.2, 4421]: 21423 00:25:04.218 @path[10.0.0.2, 4421]: 21648 00:25:04.218 06:55:17 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:04.218 06:55:17 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:04.218 06:55:17 -- host/multipath.sh@69 -- # sed -n 1p 00:25:04.218 06:55:17 -- host/multipath.sh@69 -- # port=4421 00:25:04.218 06:55:17 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:04.218 06:55:17 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:04.218 06:55:17 -- host/multipath.sh@72 -- # kill 89267 00:25:04.218 06:55:17 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:04.218 06:55:17 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:04.218 [2024-12-14 06:55:17.894736] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.218 [2024-12-14 06:55:17.894793] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.218 [2024-12-14 06:55:17.894821] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.218 [2024-12-14 06:55:17.894829] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.218 [2024-12-14 06:55:17.894838] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.218 [2024-12-14 06:55:17.894846] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.218 [2024-12-14 06:55:17.894854] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.218 [2024-12-14 06:55:17.894862] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.218 [2024-12-14 06:55:17.894870] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.218 [2024-12-14 06:55:17.894878] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.218 [2024-12-14 06:55:17.894886] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.218 [2024-12-14 06:55:17.894893] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.218 [2024-12-14 06:55:17.894901] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.218 [2024-12-14 06:55:17.894908] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.218 [2024-12-14 06:55:17.894916] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.218 [2024-12-14 06:55:17.894924] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.218 [2024-12-14 06:55:17.894932] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.218 [2024-12-14 06:55:17.894939] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.218 [2024-12-14 06:55:17.894947] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.218 [2024-12-14 06:55:17.894999] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895009] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895019] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895027] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895036] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895044] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895052] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895066] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895074] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895084] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895092] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895100] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895108] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895116] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895125] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895133] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895141] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895149] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895158] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895166] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895174] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895182] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895190] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895197] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895206] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895215] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895222] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895230] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895241] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895251] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895259] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895267] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895275] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895299] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895306] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895314] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895338] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895345] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895352] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895365] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the 
state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895372] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895380] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895387] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895394] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895402] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895410] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895419] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895426] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895434] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895441] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895448] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895456] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895463] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895471] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895478] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895485] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 [2024-12-14 06:55:17.895493] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff800 is same with the state(5) to be set 00:25:04.219 06:55:17 -- host/multipath.sh@101 -- # sleep 1 00:25:05.156 06:55:18 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:25:05.156 06:55:18 -- host/multipath.sh@65 -- # dtrace_pid=89404 00:25:05.156 06:55:18 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88545 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:05.156 06:55:18 -- host/multipath.sh@66 -- # sleep 6 00:25:11.723 06:55:24 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:11.723 06:55:24 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:11.723 06:55:25 -- 
host/multipath.sh@67 -- # active_port=4420 00:25:11.723 06:55:25 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:11.723 Attaching 4 probes... 00:25:11.723 @path[10.0.0.2, 4420]: 21337 00:25:11.723 @path[10.0.0.2, 4420]: 21624 00:25:11.723 @path[10.0.0.2, 4420]: 21447 00:25:11.723 @path[10.0.0.2, 4420]: 21489 00:25:11.723 @path[10.0.0.2, 4420]: 21490 00:25:11.723 06:55:25 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:11.723 06:55:25 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:11.723 06:55:25 -- host/multipath.sh@69 -- # sed -n 1p 00:25:11.724 06:55:25 -- host/multipath.sh@69 -- # port=4420 00:25:11.724 06:55:25 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:11.724 06:55:25 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:11.724 06:55:25 -- host/multipath.sh@72 -- # kill 89404 00:25:11.724 06:55:25 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:11.724 06:55:25 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:11.724 [2024-12-14 06:55:25.519584] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:11.724 06:55:25 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:11.982 06:55:25 -- host/multipath.sh@111 -- # sleep 6 00:25:18.548 06:55:31 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:25:18.548 06:55:31 -- host/multipath.sh@65 -- # dtrace_pid=89591 00:25:18.548 06:55:31 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88545 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:18.548 06:55:31 -- host/multipath.sh@66 -- # sleep 6 00:25:23.844 06:55:37 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:23.844 06:55:37 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:24.411 06:55:38 -- host/multipath.sh@67 -- # active_port=4421 00:25:24.411 06:55:38 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:24.411 Attaching 4 probes... 
00:25:24.411 @path[10.0.0.2, 4421]: 20751 00:25:24.411 @path[10.0.0.2, 4421]: 20956 00:25:24.411 @path[10.0.0.2, 4421]: 20959 00:25:24.411 @path[10.0.0.2, 4421]: 20754 00:25:24.411 @path[10.0.0.2, 4421]: 19529 00:25:24.411 06:55:38 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:24.411 06:55:38 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:24.411 06:55:38 -- host/multipath.sh@69 -- # sed -n 1p 00:25:24.411 06:55:38 -- host/multipath.sh@69 -- # port=4421 00:25:24.411 06:55:38 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:24.411 06:55:38 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:24.411 06:55:38 -- host/multipath.sh@72 -- # kill 89591 00:25:24.411 06:55:38 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:24.411 06:55:38 -- host/multipath.sh@114 -- # killprocess 88650 00:25:24.411 06:55:38 -- common/autotest_common.sh@936 -- # '[' -z 88650 ']' 00:25:24.411 06:55:38 -- common/autotest_common.sh@940 -- # kill -0 88650 00:25:24.411 06:55:38 -- common/autotest_common.sh@941 -- # uname 00:25:24.411 06:55:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:24.411 06:55:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88650 00:25:24.411 killing process with pid 88650 00:25:24.411 06:55:38 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:24.411 06:55:38 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:24.411 06:55:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88650' 00:25:24.411 06:55:38 -- common/autotest_common.sh@955 -- # kill 88650 00:25:24.411 06:55:38 -- common/autotest_common.sh@960 -- # wait 88650 00:25:24.411 Connection closed with partial response: 00:25:24.411 00:25:24.411 00:25:24.678 06:55:38 -- host/multipath.sh@116 -- # wait 88650 00:25:24.678 06:55:38 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:24.678 [2024-12-14 06:54:40.479038] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:24.678 [2024-12-14 06:54:40.479154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88650 ] 00:25:24.678 [2024-12-14 06:54:40.614730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.678 [2024-12-14 06:54:40.734263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:24.678 Running I/O for 90 seconds... 
00:25:24.678 [2024-12-14 06:54:50.771180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.678 [2024-12-14 06:54:50.771260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.771401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.678 [2024-12-14 06:54:50.771422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.771442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.771456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.771477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.771491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.771521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.771535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.771555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.771569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.771589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.771603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.771622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.771637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.771656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.771670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.771690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.771704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.771723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.771764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.771787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.771802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.771835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.771849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.771883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.771898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.771918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.771933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.771982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.772017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.772481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.772508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.772532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.772548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.772567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.772583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.772603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.678 [2024-12-14 06:54:50.772617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.772636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.678 [2024-12-14 06:54:50.772650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.772670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.772684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.772703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.678 [2024-12-14 06:54:50.772717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.772778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.772805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.772824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.678 [2024-12-14 06:54:50.772838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.772857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.772870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.772901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.678 [2024-12-14 06:54:50.772915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.772934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.678 [2024-12-14 06:54:50.772948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.773011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.773026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.773386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:24.678 [2024-12-14 06:54:50.773425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.773462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.678 [2024-12-14 06:54:50.773492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.773512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.773526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.773546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.773562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:24.678 [2024-12-14 06:54:50.773582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.678 [2024-12-14 06:54:50.773607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.773627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.773641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.773670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.679 [2024-12-14 06:54:50.773686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.773705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.679 [2024-12-14 06:54:50.773719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.773738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.679 [2024-12-14 06:54:50.773768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.773799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.679 [2024-12-14 06:54:50.773815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.773849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 
lba:18888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.773880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.773900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.773914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.773961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.679 [2024-12-14 06:54:50.773978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.773999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.679 [2024-12-14 06:54:50.774014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.774035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.774050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.774527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.679 [2024-12-14 06:54:50.774562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.774588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.679 [2024-12-14 06:54:50.774604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.774624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.774638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.774674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.774700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.774722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.774748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.774768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.774783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.774803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.774830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.774850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.774865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.774886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.774901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.774921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.774936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.774972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.774987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.775009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.775025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.775060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.775080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.775102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.775135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.775156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.775171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003e p:0 m:0 
dnr:0 00:25:24.679 [2024-12-14 06:54:50.775207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.775231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.775255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.775270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.775305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.775321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.775344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.679 [2024-12-14 06:54:50.775370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.775406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.775422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.775442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.679 [2024-12-14 06:54:50.775457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.775479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.775494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.775515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.775541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.775577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.775592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.775612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.775627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.775648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.775662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.775683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.775698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.775718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.775733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:24.679 [2024-12-14 06:54:50.775761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.679 [2024-12-14 06:54:50.775776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.775797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.775813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.775834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.680 [2024-12-14 06:54:50.775849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.775870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.680 [2024-12-14 06:54:50.775885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.775906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.680 [2024-12-14 06:54:50.775921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.775942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.775972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.775993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.680 [2024-12-14 06:54:50.776024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.776061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.776087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.776856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.680 [2024-12-14 06:54:50.776885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.776910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.680 [2024-12-14 06:54:50.776926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.776946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.776991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.680 [2024-12-14 06:54:50.777052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.777119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.680 [2024-12-14 06:54:50.777156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.680 [2024-12-14 06:54:50.777192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.680 [2024-12-14 06:54:50.777228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:24.680 [2024-12-14 06:54:50.777265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.680 [2024-12-14 06:54:50.777302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.680 [2024-12-14 06:54:50.777360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.777404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.777439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.777475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.777511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.777547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.777604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.777641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 
nsid:1 lba:18680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.777678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.777730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.777793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.777828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.777874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.680 [2024-12-14 06:54:50.777910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.777931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.777991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.778012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.778027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.778064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.778080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.778115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.778133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.778156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.680 [2024-12-14 06:54:50.778201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.778225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.778241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.778263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.778278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.778300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.778316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:24.680 [2024-12-14 06:54:50.778337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.680 [2024-12-14 06:54:50.778353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:50.778375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.681 [2024-12-14 06:54:50.778404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:50.778425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.681 [2024-12-14 06:54:50.778456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:50.778477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.681 [2024-12-14 06:54:50.778502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:50.778522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.681 [2024-12-14 06:54:50.778537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:50.778558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.681 [2024-12-14 06:54:50.778573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:25:24.681 [2024-12-14 06:54:57.406856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.681 [2024-12-14 06:54:57.406927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.407035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.681 [2024-12-14 06:54:57.407056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.407078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.681 [2024-12-14 06:54:57.407093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.407139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.681 [2024-12-14 06:54:57.407155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.407176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.681 [2024-12-14 06:54:57.407191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.407212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.681 [2024-12-14 06:54:57.407226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.407246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.681 [2024-12-14 06:54:57.407260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.407279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.681 [2024-12-14 06:54:57.407294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.407328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.681 [2024-12-14 06:54:57.407342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.407361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.681 [2024-12-14 06:54:57.407374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.407394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.681 [2024-12-14 06:54:57.407408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.407427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.681 [2024-12-14 06:54:57.407441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.408376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.681 [2024-12-14 06:54:57.408404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.408432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.681 [2024-12-14 06:54:57.408448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.408470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.681 [2024-12-14 06:54:57.408484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.408520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.681 [2024-12-14 06:54:57.408536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.408570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.681 [2024-12-14 06:54:57.408584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.408606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.681 [2024-12-14 06:54:57.408620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.408642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.681 [2024-12-14 06:54:57.408655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.408688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.681 [2024-12-14 06:54:57.408703] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.408728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.681 [2024-12-14 06:54:57.408743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.408765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.681 [2024-12-14 06:54:57.408779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.408802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.681 [2024-12-14 06:54:57.408816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.408845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.681 [2024-12-14 06:54:57.408859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.408881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.681 [2024-12-14 06:54:57.408895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.408918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.681 [2024-12-14 06:54:57.408932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.408955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.681 [2024-12-14 06:54:57.408969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.409017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.681 [2024-12-14 06:54:57.409037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.409068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.681 [2024-12-14 06:54:57.409082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.409105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:24.681 [2024-12-14 06:54:57.409120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.409142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.681 [2024-12-14 06:54:57.409157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.409180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.681 [2024-12-14 06:54:57.409196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.409219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.681 [2024-12-14 06:54:57.409233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.409256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.681 [2024-12-14 06:54:57.409270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.681 [2024-12-14 06:54:57.409293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.681 [2024-12-14 06:54:57.409307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.409329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.682 [2024-12-14 06:54:57.409344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.409366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.409380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.409403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.409417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.409439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.409454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.409476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 
nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.682 [2024-12-14 06:54:57.409503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.409527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.409541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.409563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.682 [2024-12-14 06:54:57.409578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.409601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.409627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.409648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.409662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.409685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.409699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.409721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.409735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.409757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.682 [2024-12-14 06:54:57.409771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.409805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.682 [2024-12-14 06:54:57.409820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.409954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.682 [2024-12-14 06:54:57.409996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.410025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.410042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.410074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.410089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.410114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.410139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.410201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.410219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.410245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.410261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.410287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.410302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.410327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.410343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.410368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.410383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.410409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.410425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.410450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.410465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 
00:25:24.682 [2024-12-14 06:54:57.410490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.410505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.410531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.410554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.410580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.410600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.410625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.682 [2024-12-14 06:54:57.410640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.410666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.682 [2024-12-14 06:54:57.410696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.410748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.410765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.410790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.410805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.410830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.410861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.410887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.410904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.682 [2024-12-14 06:54:57.410931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.682 [2024-12-14 06:54:57.410946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.410984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:54:57.411011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:54:57.411058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:54:57.411100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:54:57.411143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:54:57.411198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:54:57.411239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:84608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:54:57.411280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:54:57.411330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:54:57.411381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:54:57.411421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.683 [2024-12-14 06:54:57.411462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:54:57.411513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.683 [2024-12-14 06:54:57.411553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.683 [2024-12-14 06:54:57.411600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:54:57.411640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:54:57.411680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:54:57.411721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.683 [2024-12-14 06:54:57.411762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.683 [2024-12-14 06:54:57.411802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:24.683 [2024-12-14 06:54:57.411848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:54:57.411890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.683 [2024-12-14 06:54:57.411930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.411955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:54:57.411971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.412008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:54:57.412035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.412060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:54:57.412075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:54:57.412100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.683 [2024-12-14 06:54:57.412115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:55:04.478913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.683 [2024-12-14 06:55:04.479030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:55:04.479086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:50528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:55:04.479108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:55:04.479130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:50536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:55:04.479145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:55:04.479166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 
lba:50544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.683 [2024-12-14 06:55:04.479180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:55:04.479203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:50552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:55:04.479218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:55:04.479238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:50560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:55:04.479272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:55:04.479309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:50568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:55:04.479323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:55:04.479356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:50576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:55:04.479370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:55:04.479389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.683 [2024-12-14 06:55:04.479402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:55:04.479437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:50592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.683 [2024-12-14 06:55:04.479466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:55:04.479485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.683 [2024-12-14 06:55:04.479498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:55:04.479517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:50608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.683 [2024-12-14 06:55:04.479535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:55:04.479553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:50616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:55:04.479567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:24.683 [2024-12-14 06:55:04.479586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:50624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.683 [2024-12-14 06:55:04.479599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.479617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.684 [2024-12-14 06:55:04.479630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.479665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:50640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.684 [2024-12-14 06:55:04.479679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.479702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:50648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.479716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.479737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:50656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.479759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.479787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:50664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.479803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.479823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.479838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.479857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:50680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.479871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.479890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:50688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.479905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.480621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.684 [2024-12-14 06:55:04.480650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:25:24.684 [2024-12-14 06:55:04.480678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.480694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.480716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.480731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.480753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:50720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.480767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.480790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:50728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.684 [2024-12-14 06:55:04.480804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.480838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.684 [2024-12-14 06:55:04.480852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.480885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.684 [2024-12-14 06:55:04.480900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.480921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:50752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.684 [2024-12-14 06:55:04.480936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.480986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:50760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.481002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.481039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.684 [2024-12-14 06:55:04.481057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.481080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.684 [2024-12-14 06:55:04.481096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.481120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:50784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.481135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.481158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.684 [2024-12-14 06:55:04.481174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.481284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.684 [2024-12-14 06:55:04.481308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.481348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.684 [2024-12-14 06:55:04.481380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.481403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:50816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.481418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.481441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:50824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.684 [2024-12-14 06:55:04.481455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.481478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:50832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.481493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.481516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:50840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.481531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.481555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:50848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.684 [2024-12-14 06:55:04.481569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.481592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:50856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.481617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.481643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:50864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.481657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.481684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:50872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.481698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.481721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:50880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.481735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.481759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:50120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.481773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.481796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:50128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.481810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.481833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:50168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.481849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.481873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.481888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.481911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:50192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.481926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.481966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:50200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.684 [2024-12-14 06:55:04.481980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.482039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:50208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:24.684 [2024-12-14 06:55:04.482057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:24.684 [2024-12-14 06:55:04.482081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:50256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.482096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.482119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.482141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.482191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.482207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.482232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:50304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.482247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.482271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:50312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.482285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.482309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:50320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.482324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.482348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:50336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.482364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.482403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:50360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.482417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.482441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:50376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.482455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.482478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 
nsid:1 lba:50888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.482493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.482516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.685 [2024-12-14 06:55:04.482542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.482565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.685 [2024-12-14 06:55:04.482579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.482604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:50912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.482619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.482645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:50920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.482660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.482691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.685 [2024-12-14 06:55:04.482729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.482754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:50936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.482768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.482791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:50944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.685 [2024-12-14 06:55:04.482807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.482841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.482855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.482878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.685 [2024-12-14 06:55:04.482892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.482916] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:50968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.482937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.482960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:50976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.482974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.483011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.483026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.483049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.685 [2024-12-14 06:55:04.483063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.483101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:51000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.685 [2024-12-14 06:55:04.483131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.483154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.685 [2024-12-14 06:55:04.483169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.483197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:51016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.483211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.483256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.685 [2024-12-14 06:55:04.483272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.483296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.685 [2024-12-14 06:55:04.483311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.483334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:51040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.483349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
00:25:24.685 [2024-12-14 06:55:04.483372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:51048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.685 [2024-12-14 06:55:04.483387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.483409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.685 [2024-12-14 06:55:04.483424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.483448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:51064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.483462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.483597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:51072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.685 [2024-12-14 06:55:04.483618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.483648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:51080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.685 [2024-12-14 06:55:04.483664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.483702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.685 [2024-12-14 06:55:04.483717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.483752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:50400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.483766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.483792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:50408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.483808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.483834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:50424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.483848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.483895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.483912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.483946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:50480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.685 [2024-12-14 06:55:04.483972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:24.685 [2024-12-14 06:55:04.484014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:50488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.686 [2024-12-14 06:55:04.484032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:04.484059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:50496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.686 [2024-12-14 06:55:04.484074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:04.484101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:50504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.686 [2024-12-14 06:55:04.484116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:04.484143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:51096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.686 [2024-12-14 06:55:04.484159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:04.484197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.686 [2024-12-14 06:55:04.484212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:04.484239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:51112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.686 [2024-12-14 06:55:04.484253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:04.484280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.686 [2024-12-14 06:55:04.484294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:04.484321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:51128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.686 [2024-12-14 06:55:04.484335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:04.484362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:51136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.686 [2024-12-14 06:55:04.484376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:04.484413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.686 [2024-12-14 06:55:04.484427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:04.484453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.686 [2024-12-14 06:55:04.484486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:04.484515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:51160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.686 [2024-12-14 06:55:04.484530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:17.896050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.686 [2024-12-14 06:55:17.896098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:17.896126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:124080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.686 [2024-12-14 06:55:17.896142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:17.896157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:124112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.686 [2024-12-14 06:55:17.896170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:17.896184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:124120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.686 [2024-12-14 06:55:17.896197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:17.896212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:124128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.686 [2024-12-14 06:55:17.896225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:17.896239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.686 [2024-12-14 06:55:17.896251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:17.896266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:124144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.686 [2024-12-14 06:55:17.896293] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:17.896307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.686 [2024-12-14 06:55:17.896334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:17.896347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:124160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.686 [2024-12-14 06:55:17.896359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:17.896374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:124168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.686 [2024-12-14 06:55:17.896387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:17.896400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:124176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.686 [2024-12-14 06:55:17.896412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:17.896464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:124184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.686 [2024-12-14 06:55:17.896478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:17.896491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:124192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.686 [2024-12-14 06:55:17.896502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:17.896515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:123496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.686 [2024-12-14 06:55:17.896527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:17.896541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:123504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.686 [2024-12-14 06:55:17.896553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:17.896566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:123512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.686 [2024-12-14 06:55:17.896578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:17.896592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.686 [2024-12-14 06:55:17.896607] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.686 [2024-12-14 06:55:17.896621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:123544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.686 [2024-12-14 06:55:17.896634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical NOTICE pairs from 06:55:17.896648 through 06:55:17.899766 condensed: nvme_io_qpair_print_command prints each remaining queued READ/WRITE on qid:1 (nsid:1, lbas between 123552 and 124792, len:8) and spdk_nvme_print_completion reports every one of them as ABORTED - SQ DELETION (00/08) while the submission queue is deleted for the controller reset]
00:25:24.689 [2024-12-14 06:55:17.899779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b5b0 is same with the state(5) to be set 00:25:24.689 [2024-12-14 06:55:17.899795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*:
aborting queued i/o 00:25:24.689 [2024-12-14 06:55:17.899805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:24.689 [2024-12-14 06:55:17.899815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124104 len:8 PRP1 0x0 PRP2 0x0 00:25:24.689 [2024-12-14 06:55:17.899827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.689 [2024-12-14 06:55:17.899893] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb8b5b0 was disconnected and freed. reset controller. 00:25:24.689 [2024-12-14 06:55:17.901308] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:24.689 [2024-12-14 06:55:17.901399] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2f790 (9): Bad file descriptor 00:25:24.689 [2024-12-14 06:55:17.901540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:24.689 [2024-12-14 06:55:17.901598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:24.689 [2024-12-14 06:55:17.901620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2f790 with addr=10.0.0.2, port=4421 00:25:24.689 [2024-12-14 06:55:17.901636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2f790 is same with the state(5) to be set 00:25:24.689 [2024-12-14 06:55:17.901659] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2f790 (9): Bad file descriptor 00:25:24.689 [2024-12-14 06:55:17.901682] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:24.689 [2024-12-14 06:55:17.901696] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:24.689 [2024-12-14 06:55:17.901710] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:24.689 [2024-12-14 06:55:17.901737] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:24.689 [2024-12-14 06:55:17.901752] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:24.689 [2024-12-14 06:55:27.956298] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:24.689 Received shutdown signal, test time was about 55.578077 seconds
00:25:24.689
00:25:24.689                                            Latency(us)
00:25:24.689 [2024-12-14T06:55:38.681Z] Device Information : runtime(s)     IOPS     MiB/s   Fail/s   TO/s    Average      min        max
00:25:24.689 [2024-12-14T06:55:38.681Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:24.689 Verification LBA range: start 0x0 length 0x4000
00:25:24.689 Nvme0n1            :      55.58   11586.13    45.26     0.00   0.00   11030.41   744.73 7015926.69
00:25:24.689 [2024-12-14T06:55:38.681Z] ===================================================================================================================
00:25:24.689 [2024-12-14T06:55:38.681Z] Total              :              11586.13    45.26     0.00   0.00   11030.41   744.73 7015926.69
00:25:24.689 06:55:38 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:24.949 06:55:38 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:25:24.949 06:55:38 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:25:24.949 06:55:38 -- host/multipath.sh@125 -- # nvmftestfini
00:25:24.949 06:55:38 -- nvmf/common.sh@476 -- # nvmfcleanup
00:25:24.949 06:55:38 -- nvmf/common.sh@116 -- # sync
00:25:24.949 06:55:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:25:24.949 06:55:38 -- nvmf/common.sh@119 -- # set +e
00:25:24.949 06:55:38 -- nvmf/common.sh@120 -- # for i in {1..20}
00:25:24.949 06:55:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:25:24.949 rmmod nvme_tcp
00:25:24.949 rmmod nvme_fabrics
00:25:24.949 rmmod nvme_keyring
00:25:24.949 06:55:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:25:24.949 06:55:38 -- nvmf/common.sh@123 -- # set -e
00:25:24.949 06:55:38 -- nvmf/common.sh@124 -- # return 0
00:25:24.949 06:55:38 -- nvmf/common.sh@477 -- # '[' -n 88545 ']'
00:25:24.949 06:55:38 -- nvmf/common.sh@478 -- # killprocess 88545
00:25:24.949 06:55:38 -- common/autotest_common.sh@936 -- # '[' -z 88545 ']'
00:25:24.949 06:55:38 -- common/autotest_common.sh@940 -- # kill -0 88545
00:25:24.949 06:55:38 -- common/autotest_common.sh@941 -- # uname
00:25:24.949 06:55:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:24.949 06:55:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88545
00:25:24.949 06:55:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:25:24.949 06:55:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:25:24.949 killing process with pid 88545
00:25:24.949 06:55:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88545'
00:25:24.949 06:55:38 -- common/autotest_common.sh@955 -- # kill 88545
00:25:24.949 06:55:38 -- common/autotest_common.sh@960 -- # wait 88545
00:25:25.208 06:55:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:25:25.208 06:55:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:25:25.208 06:55:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:25:25.208 06:55:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:25.208 06:55:39 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:25:25.208 06:55:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:25.208 06:55:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:25.208 06:55:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:25.467 06:55:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:25:25.467
00:25:25.467 real 1m2.090s
00:25:25.467 user 2m54.767s
00:25:25.467
sys 0m14.524s 00:25:25.467 06:55:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:25.467 06:55:39 -- common/autotest_common.sh@10 -- # set +x 00:25:25.467 ************************************ 00:25:25.467 END TEST nvmf_multipath 00:25:25.467 ************************************ 00:25:25.467 06:55:39 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:25:25.467 06:55:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:25.467 06:55:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:25.467 06:55:39 -- common/autotest_common.sh@10 -- # set +x 00:25:25.467 ************************************ 00:25:25.467 START TEST nvmf_timeout 00:25:25.467 ************************************ 00:25:25.467 06:55:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:25:25.467 * Looking for test storage... 00:25:25.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:25.467 06:55:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:25.467 06:55:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:25.467 06:55:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:25.467 06:55:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:25.467 06:55:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:25.467 06:55:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:25.467 06:55:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:25.467 06:55:39 -- scripts/common.sh@335 -- # IFS=.-: 00:25:25.467 06:55:39 -- scripts/common.sh@335 -- # read -ra ver1 00:25:25.467 06:55:39 -- scripts/common.sh@336 -- # IFS=.-: 00:25:25.467 06:55:39 -- scripts/common.sh@336 -- # read -ra ver2 00:25:25.467 06:55:39 -- scripts/common.sh@337 -- # local 'op=<' 00:25:25.467 06:55:39 -- scripts/common.sh@339 -- # ver1_l=2 00:25:25.467 06:55:39 -- scripts/common.sh@340 -- # ver2_l=1 00:25:25.467 06:55:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:25.467 06:55:39 -- scripts/common.sh@343 -- # case "$op" in 00:25:25.467 06:55:39 -- scripts/common.sh@344 -- # : 1 00:25:25.467 06:55:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:25.467 06:55:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:25.467 06:55:39 -- scripts/common.sh@364 -- # decimal 1 00:25:25.467 06:55:39 -- scripts/common.sh@352 -- # local d=1 00:25:25.467 06:55:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:25.467 06:55:39 -- scripts/common.sh@354 -- # echo 1 00:25:25.467 06:55:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:25.467 06:55:39 -- scripts/common.sh@365 -- # decimal 2 00:25:25.467 06:55:39 -- scripts/common.sh@352 -- # local d=2 00:25:25.467 06:55:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:25.467 06:55:39 -- scripts/common.sh@354 -- # echo 2 00:25:25.467 06:55:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:25.467 06:55:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:25.467 06:55:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:25.467 06:55:39 -- scripts/common.sh@367 -- # return 0 00:25:25.467 06:55:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:25.467 06:55:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:25.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.467 --rc genhtml_branch_coverage=1 00:25:25.467 --rc genhtml_function_coverage=1 00:25:25.467 --rc genhtml_legend=1 00:25:25.467 --rc geninfo_all_blocks=1 00:25:25.467 --rc geninfo_unexecuted_blocks=1 00:25:25.467 00:25:25.467 ' 00:25:25.467 06:55:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:25.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.467 --rc genhtml_branch_coverage=1 00:25:25.467 --rc genhtml_function_coverage=1 00:25:25.467 --rc genhtml_legend=1 00:25:25.467 --rc geninfo_all_blocks=1 00:25:25.467 --rc geninfo_unexecuted_blocks=1 00:25:25.467 00:25:25.467 ' 00:25:25.467 06:55:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:25.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.467 --rc genhtml_branch_coverage=1 00:25:25.467 --rc genhtml_function_coverage=1 00:25:25.467 --rc genhtml_legend=1 00:25:25.467 --rc geninfo_all_blocks=1 00:25:25.467 --rc geninfo_unexecuted_blocks=1 00:25:25.467 00:25:25.467 ' 00:25:25.467 06:55:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:25.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.467 --rc genhtml_branch_coverage=1 00:25:25.467 --rc genhtml_function_coverage=1 00:25:25.467 --rc genhtml_legend=1 00:25:25.467 --rc geninfo_all_blocks=1 00:25:25.467 --rc geninfo_unexecuted_blocks=1 00:25:25.467 00:25:25.467 ' 00:25:25.467 06:55:39 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:25.467 06:55:39 -- nvmf/common.sh@7 -- # uname -s 00:25:25.467 06:55:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.467 06:55:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.467 06:55:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.467 06:55:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.467 06:55:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.467 06:55:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.467 06:55:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.467 06:55:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.467 06:55:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.467 06:55:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.726 06:55:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:25:25.726 
06:55:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:25:25.726 06:55:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.726 06:55:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:25.726 06:55:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:25.726 06:55:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:25.726 06:55:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.726 06:55:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.726 06:55:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.726 06:55:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.726 06:55:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.726 06:55:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.726 06:55:39 -- paths/export.sh@5 -- # export PATH 00:25:25.726 06:55:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.726 06:55:39 -- nvmf/common.sh@46 -- # : 0 00:25:25.726 06:55:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:25.726 06:55:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:25.726 06:55:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:25.726 06:55:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.726 06:55:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.726 06:55:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:25:25.726 06:55:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:25.726 06:55:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:25.726 06:55:39 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:25.726 06:55:39 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:25.726 06:55:39 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:25.726 06:55:39 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:25.726 06:55:39 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:25.726 06:55:39 -- host/timeout.sh@19 -- # nvmftestinit 00:25:25.726 06:55:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:25.726 06:55:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:25.726 06:55:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:25.726 06:55:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:25.726 06:55:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:25.726 06:55:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.726 06:55:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:25.726 06:55:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.726 06:55:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:25.726 06:55:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:25.726 06:55:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:25.726 06:55:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:25.727 06:55:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:25.727 06:55:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:25.727 06:55:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:25.727 06:55:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:25.727 06:55:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:25.727 06:55:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:25.727 06:55:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:25.727 06:55:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:25.727 06:55:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:25.727 06:55:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:25.727 06:55:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:25.727 06:55:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:25.727 06:55:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:25.727 06:55:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:25.727 06:55:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:25.727 06:55:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:25.727 Cannot find device "nvmf_tgt_br" 00:25:25.727 06:55:39 -- nvmf/common.sh@154 -- # true 00:25:25.727 06:55:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:25.727 Cannot find device "nvmf_tgt_br2" 00:25:25.727 06:55:39 -- nvmf/common.sh@155 -- # true 00:25:25.727 06:55:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:25.727 06:55:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:25.727 Cannot find device "nvmf_tgt_br" 00:25:25.727 06:55:39 -- nvmf/common.sh@157 -- # true 00:25:25.727 06:55:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:25.727 Cannot find device "nvmf_tgt_br2" 00:25:25.727 06:55:39 -- nvmf/common.sh@158 -- # true 00:25:25.727 06:55:39 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:25.727 06:55:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:25.727 06:55:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:25.727 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:25.727 06:55:39 -- nvmf/common.sh@161 -- # true 00:25:25.727 06:55:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:25.727 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:25.727 06:55:39 -- nvmf/common.sh@162 -- # true 00:25:25.727 06:55:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:25.727 06:55:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:25.727 06:55:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:25.727 06:55:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:25.727 06:55:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:25.727 06:55:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:25.727 06:55:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:25.727 06:55:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:25.727 06:55:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:25.727 06:55:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:25.727 06:55:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:25.727 06:55:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:25.727 06:55:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:25.727 06:55:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:25.727 06:55:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:25.727 06:55:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:25.727 06:55:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:25.727 06:55:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:25.985 06:55:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:25.986 06:55:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:25.986 06:55:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:25.986 06:55:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:25.986 06:55:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:25.986 06:55:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:25.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:25.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:25:25.986 00:25:25.986 --- 10.0.0.2 ping statistics --- 00:25:25.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.986 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:25:25.986 06:55:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:25.986 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:25.986 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:25:25.986 00:25:25.986 --- 10.0.0.3 ping statistics --- 00:25:25.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.986 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:25:25.986 06:55:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:25.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:25.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:25:25.986 00:25:25.986 --- 10.0.0.1 ping statistics --- 00:25:25.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.986 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:25:25.986 06:55:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:25.986 06:55:39 -- nvmf/common.sh@421 -- # return 0 00:25:25.986 06:55:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:25.986 06:55:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:25.986 06:55:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:25.986 06:55:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:25.986 06:55:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:25.986 06:55:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:25.986 06:55:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:25.986 06:55:39 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:25:25.986 06:55:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:25.986 06:55:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:25.986 06:55:39 -- common/autotest_common.sh@10 -- # set +x 00:25:25.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.986 06:55:39 -- nvmf/common.sh@469 -- # nvmfpid=89921 00:25:25.986 06:55:39 -- nvmf/common.sh@470 -- # waitforlisten 89921 00:25:25.986 06:55:39 -- common/autotest_common.sh@829 -- # '[' -z 89921 ']' 00:25:25.986 06:55:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.986 06:55:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:25.986 06:55:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.986 06:55:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:25.986 06:55:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:25.986 06:55:39 -- common/autotest_common.sh@10 -- # set +x 00:25:25.986 [2024-12-14 06:55:39.866535] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:25.986 [2024-12-14 06:55:39.866601] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:26.244 [2024-12-14 06:55:39.999547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:26.244 [2024-12-14 06:55:40.082183] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:26.244 [2024-12-14 06:55:40.082341] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:26.244 [2024-12-14 06:55:40.082355] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
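The netns/veth plumbing that nvmf_veth_init walks through above can be reproduced by hand. Below is a minimal sketch using the same names and addresses that appear in the trace, but omitting the second target interface (nvmf_tgt_if2 / 10.0.0.3) and the cleanup/teardown steps, so it is an illustration of the topology rather than the test's own helper:

#!/usr/bin/env bash
# Sketch of the topology nvmf_veth_init builds above (names/addresses taken
# from the log; run as root). The target side lives in a network namespace,
# the initiator side stays in the root namespace, and a bridge joins them.
set -euo pipefail

ip netns add nvmf_tgt_ns_spdk

# veth pairs: one leg carries the IP, the peer leg joins the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# initiator 10.0.0.1, target 10.0.0.2 (same /24 as the pings above)
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the peer legs and allow NVMe/TCP traffic (port 4420) plus forwarding
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity check, mirroring the log: initiator -> target and target -> initiator
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1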
00:25:26.244 [2024-12-14 06:55:40.082363] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:26.244 [2024-12-14 06:55:40.082545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.244 [2024-12-14 06:55:40.082557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.180 06:55:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:27.180 06:55:40 -- common/autotest_common.sh@862 -- # return 0 00:25:27.180 06:55:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:27.180 06:55:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:27.180 06:55:40 -- common/autotest_common.sh@10 -- # set +x 00:25:27.180 06:55:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.180 06:55:40 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:27.180 06:55:40 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:27.439 [2024-12-14 06:55:41.205160] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.439 06:55:41 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:27.697 Malloc0 00:25:27.697 06:55:41 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:27.956 06:55:41 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:28.214 06:55:41 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:28.473 [2024-12-14 06:55:42.209581] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.473 06:55:42 -- host/timeout.sh@32 -- # bdevperf_pid=90018 00:25:28.473 06:55:42 -- host/timeout.sh@34 -- # waitforlisten 90018 /var/tmp/bdevperf.sock 00:25:28.473 06:55:42 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:25:28.473 06:55:42 -- common/autotest_common.sh@829 -- # '[' -z 90018 ']' 00:25:28.473 06:55:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:28.473 06:55:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:28.473 06:55:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:28.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:28.473 06:55:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:28.473 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:25:28.473 [2024-12-14 06:55:42.285616] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
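Pulling the RPC calls out of the trace above (plus the bdev_nvme attach that appears just below), the target/initiator wiring this run exercises condenses to the following sketch; the paths, names, and flags are taken verbatim from the log, but this is an illustration rather than the test script itself:

#!/usr/bin/env bash
# Condensed from the log: target-side configuration plus the bdevperf attach.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
bperf_sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# Target (nvmf_tgt itself runs inside nvmf_tgt_ns_spdk): TCP transport, a
# 64 MiB malloc bdev with 512 B blocks, and a listener on 10.0.0.2:4420.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns "$nqn" Malloc0
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

# Initiator: bdevperf waits for RPC (-z), queue depth 128, 4 KiB verify I/O.
# (The test waits for the RPC socket before issuing the calls below.)
$bperf -m 0x4 -z -r "$bperf_sock" -q 128 -o 4096 -w verify -t 10 -f &

# Attach NVMe0 with the knobs this test is about: keep retrying the
# connection every 2 s and only give the controller up after 5 s of failures.
$rpc -s "$bperf_sock" bdev_nvme_set_options -r -1
$rpc -s "$bperf_sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn" \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

Once I/O is running, the test removes the 10.0.0.2:4420 listener (nvmf_subsystem_remove_listener further down), which is what produces the "ABORTED - SQ DELETION" completions and the reconnect attempts that dominate the rest of this trace.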
00:25:28.473 [2024-12-14 06:55:42.285698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90018 ] 00:25:28.473 [2024-12-14 06:55:42.421241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.732 [2024-12-14 06:55:42.519485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:29.300 06:55:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:29.300 06:55:43 -- common/autotest_common.sh@862 -- # return 0 00:25:29.300 06:55:43 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:29.559 06:55:43 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:29.818 NVMe0n1 00:25:29.818 06:55:43 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:29.818 06:55:43 -- host/timeout.sh@51 -- # rpc_pid=90061 00:25:29.818 06:55:43 -- host/timeout.sh@53 -- # sleep 1 00:25:30.077 Running I/O for 10 seconds... 00:25:31.013 06:55:44 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.275 [2024-12-14 06:55:45.049894] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 [2024-12-14 06:55:45.049986] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 [2024-12-14 06:55:45.049998] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 [2024-12-14 06:55:45.050006] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 [2024-12-14 06:55:45.050014] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 [2024-12-14 06:55:45.050022] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 [2024-12-14 06:55:45.050030] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 [2024-12-14 06:55:45.050037] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 [2024-12-14 06:55:45.050044] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 [2024-12-14 06:55:45.050052] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 [2024-12-14 06:55:45.050059] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 [2024-12-14 06:55:45.050066] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 
[2024-12-14 06:55:45.050073] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 [2024-12-14 06:55:45.050080] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 [2024-12-14 06:55:45.050088] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 [2024-12-14 06:55:45.050095] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 [2024-12-14 06:55:45.050103] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 [2024-12-14 06:55:45.050110] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 [2024-12-14 06:55:45.050117] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 [2024-12-14 06:55:45.050124] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 [2024-12-14 06:55:45.050132] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 [2024-12-14 06:55:45.050140] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.275 [2024-12-14 06:55:45.050147] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050156] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050164] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050172] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050180] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050187] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050207] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050218] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050225] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050233] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050241] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050248] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050255] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050262] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050270] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050278] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050286] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050294] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050301] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050309] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050317] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050324] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050332] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050339] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050347] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050355] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050362] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.050377] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:25:31.276 [2024-12-14 06:55:45.051017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.276 [2024-12-14 06:55:45.051049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.276 [2024-12-14 06:55:45.051081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130216 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:31.276 [2024-12-14 06:55:45.051102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.276 [2024-12-14 06:55:45.051121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.276 [2024-12-14 06:55:45.051139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:130272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.276 [2024-12-14 06:55:45.051157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.276 [2024-12-14 06:55:45.051174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.276 [2024-12-14 06:55:45.051192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.276 [2024-12-14 06:55:45.051210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.276 [2024-12-14 06:55:45.051227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.276 [2024-12-14 06:55:45.051245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:130848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.276 [2024-12-14 06:55:45.051263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.276 
[2024-12-14 06:55:45.051281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.276 [2024-12-14 06:55:45.051299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:130880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.276 [2024-12-14 06:55:45.051337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:130896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.276 [2024-12-14 06:55:45.051381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.276 [2024-12-14 06:55:45.051400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.276 [2024-12-14 06:55:45.051417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.276 [2024-12-14 06:55:45.051434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.276 [2024-12-14 06:55:45.051452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.276 [2024-12-14 06:55:45.051468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.276 [2024-12-14 06:55:45.051485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.276 [2024-12-14 06:55:45.051494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.276 [2024-12-14 06:55:45.051502] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.051518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.051534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.051550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.051567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.051585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.277 [2024-12-14 06:55:45.051602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.051620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.051637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.051654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.277 [2024-12-14 06:55:45.051671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.051688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.277 [2024-12-14 06:55:45.051706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.277 [2024-12-14 06:55:45.051722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.277 [2024-12-14 06:55:45.051739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.051756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.051773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.277 [2024-12-14 06:55:45.051791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.051808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.277 [2024-12-14 06:55:45.051824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.277 [2024-12-14 06:55:45.051841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.277 [2024-12-14 06:55:45.051866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.051883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.051900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.051915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.051938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.051946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.051991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.052002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.052011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.052041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.052049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.052059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.052067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.052078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.052087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.052097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.277 [2024-12-14 06:55:45.052105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.277 [2024-12-14 06:55:45.052115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.277 [2024-12-14 06:55:45.052123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.278 [2024-12-14 06:55:45.052177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052288] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.278 [2024-12-14 06:55:45.052321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.278 [2024-12-14 06:55:45.052361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052503] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.278 [2024-12-14 06:55:45.052566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.278 [2024-12-14 06:55:45.052601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.278 [2024-12-14 06:55:45.052640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:224 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.278 [2024-12-14 06:55:45.052705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.278 [2024-12-14 06:55:45.052753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.278 [2024-12-14 06:55:45.052761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.278 [2024-12-14 06:55:45.052769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.052777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.279 [2024-12-14 06:55:45.052785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.052795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.279 [2024-12-14 06:55:45.052802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.052813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.279 [2024-12-14 06:55:45.052821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.052830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.279 [2024-12-14 06:55:45.052838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.052847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.279 
[2024-12-14 06:55:45.052854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.052863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.279 [2024-12-14 06:55:45.052870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.052884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.279 [2024-12-14 06:55:45.052892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.052902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.279 [2024-12-14 06:55:45.052909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.052918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.279 [2024-12-14 06:55:45.052925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.052934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.279 [2024-12-14 06:55:45.052941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.052977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.279 [2024-12-14 06:55:45.052990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.053009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.279 [2024-12-14 06:55:45.053019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.053029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.279 [2024-12-14 06:55:45.053037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.053047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.279 [2024-12-14 06:55:45.053058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.053068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.279 [2024-12-14 06:55:45.053076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.053086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.279 [2024-12-14 06:55:45.053095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.053104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.279 [2024-12-14 06:55:45.053112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.053122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.279 [2024-12-14 06:55:45.053130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.053140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.279 [2024-12-14 06:55:45.053148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.053157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.279 [2024-12-14 06:55:45.053166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.053175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.279 [2024-12-14 06:55:45.053183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.053192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.279 [2024-12-14 06:55:45.053200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.053215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.279 [2024-12-14 06:55:45.053224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.053234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.279 [2024-12-14 06:55:45.053242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.053252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.279 [2024-12-14 06:55:45.053260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.053270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.279 [2024-12-14 06:55:45.053277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.053287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.279 [2024-12-14 06:55:45.053295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.053305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.279 [2024-12-14 06:55:45.053312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.053337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.279 [2024-12-14 06:55:45.053349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.053359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.279 [2024-12-14 06:55:45.053381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.053390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.279 [2024-12-14 06:55:45.053397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.279 [2024-12-14 06:55:45.053406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:130864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.279 [2024-12-14 06:55:45.053414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.280 [2024-12-14 06:55:45.053424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.280 [2024-12-14 06:55:45.053431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.280 [2024-12-14 06:55:45.053440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.280 [2024-12-14 06:55:45.053448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.280 [2024-12-14 06:55:45.053457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.280 [2024-12-14 06:55:45.053465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:31.280 [2024-12-14 06:55:45.053474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.280 [2024-12-14 06:55:45.053481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.280 [2024-12-14 06:55:45.053490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.280 [2024-12-14 06:55:45.053498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.280 [2024-12-14 06:55:45.053507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:130952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.280 [2024-12-14 06:55:45.053514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.280 [2024-12-14 06:55:45.053528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149a050 is same with the state(5) to be set 00:25:31.280 [2024-12-14 06:55:45.053540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.280 [2024-12-14 06:55:45.053547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.280 [2024-12-14 06:55:45.053553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130968 len:8 PRP1 0x0 PRP2 0x0 00:25:31.280 [2024-12-14 06:55:45.053561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.280 [2024-12-14 06:55:45.053622] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x149a050 was disconnected and freed. reset controller. 00:25:31.280 [2024-12-14 06:55:45.053813] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:31.280 [2024-12-14 06:55:45.053885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1424dc0 (9): Bad file descriptor 00:25:31.280 [2024-12-14 06:55:45.054029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-12-14 06:55:45.054080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.280 [2024-12-14 06:55:45.054095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1424dc0 with addr=10.0.0.2, port=4420 00:25:31.280 [2024-12-14 06:55:45.054105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1424dc0 is same with the state(5) to be set 00:25:31.280 [2024-12-14 06:55:45.054123] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1424dc0 (9): Bad file descriptor 00:25:31.280 [2024-12-14 06:55:45.054145] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:31.280 [2024-12-14 06:55:45.054163] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:31.280 [2024-12-14 06:55:45.054173] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
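The block above is one iteration of the host-side reset loop the test exercises: bdev_nvme frees the disconnected qpair, tries to reconnect to 10.0.0.2:4420, and each connect() is refused (errno = 111 is ECONNREFUSED on Linux, i.e. nothing is accepting on that address at this point), so spdk_nvme_ctrlr_reconnect_poll_async reports the reinitialization failure and the controller stays in the failed state until the next retry. A small sketch of how the same condition can be checked from the shell, reusing the bdevperf RPC socket seen in this trace; the python one-liner is only an errno lookup and is not part of the test script:

    # errno 111 on Linux is ECONNREFUSED ("Connection refused")
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'

    # Same query the test script issues to list the NVMe-oF controllers bdevperf still tracks
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_controllers | jq -r '.[].name'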
00:25:31.280 [2024-12-14 06:55:45.054211] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.280 [2024-12-14 06:55:45.054236] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:31.280 06:55:45 -- host/timeout.sh@56 -- # sleep 2 00:25:33.184 [2024-12-14 06:55:47.054363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.184 [2024-12-14 06:55:47.054453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.184 [2024-12-14 06:55:47.054472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1424dc0 with addr=10.0.0.2, port=4420 00:25:33.184 [2024-12-14 06:55:47.054485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1424dc0 is same with the state(5) to be set 00:25:33.184 [2024-12-14 06:55:47.054510] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1424dc0 (9): Bad file descriptor 00:25:33.184 [2024-12-14 06:55:47.054545] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.184 [2024-12-14 06:55:47.054610] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.184 [2024-12-14 06:55:47.054620] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.184 [2024-12-14 06:55:47.054647] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.184 [2024-12-14 06:55:47.054659] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.184 06:55:47 -- host/timeout.sh@57 -- # get_controller 00:25:33.184 06:55:47 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:33.184 06:55:47 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:25:33.444 06:55:47 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:25:33.444 06:55:47 -- host/timeout.sh@58 -- # get_bdev 00:25:33.444 06:55:47 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:25:33.444 06:55:47 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:25:33.797 06:55:47 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:25:33.797 06:55:47 -- host/timeout.sh@61 -- # sleep 5 00:25:35.174 [2024-12-14 06:55:49.054759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.174 [2024-12-14 06:55:49.054845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.174 [2024-12-14 06:55:49.054862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1424dc0 with addr=10.0.0.2, port=4420 00:25:35.174 [2024-12-14 06:55:49.054875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1424dc0 is same with the state(5) to be set 00:25:35.174 [2024-12-14 06:55:49.054899] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1424dc0 (9): Bad file descriptor 00:25:35.174 [2024-12-14 06:55:49.054917] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.174 [2024-12-14 06:55:49.054926] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller 
reinitialization failed 00:25:35.174 [2024-12-14 06:55:49.054935] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.174 [2024-12-14 06:55:49.054972] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.174 [2024-12-14 06:55:49.054994] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:37.078 [2024-12-14 06:55:51.055043] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:37.078 [2024-12-14 06:55:51.055104] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:37.078 [2024-12-14 06:55:51.055124] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:37.078 [2024-12-14 06:55:51.055134] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:37.078 [2024-12-14 06:55:51.055164] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:38.456 00:25:38.456 Latency(us) 00:25:38.456 [2024-12-14T06:55:52.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.456 [2024-12-14T06:55:52.448Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:38.456 Verification LBA range: start 0x0 length 0x4000 00:25:38.456 NVMe0n1 : 8.16 1998.06 7.80 15.68 0.00 63482.72 2621.44 7015926.69 00:25:38.456 [2024-12-14T06:55:52.448Z] =================================================================================================================== 00:25:38.456 [2024-12-14T06:55:52.448Z] Total : 1998.06 7.80 15.68 0.00 63482.72 2621.44 7015926.69 00:25:38.456 0 00:25:38.715 06:55:52 -- host/timeout.sh@62 -- # get_controller 00:25:38.715 06:55:52 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:38.715 06:55:52 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:25:38.973 06:55:52 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:25:38.973 06:55:52 -- host/timeout.sh@63 -- # get_bdev 00:25:38.973 06:55:52 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:25:38.973 06:55:52 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:25:39.233 06:55:53 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:25:39.233 06:55:53 -- host/timeout.sh@65 -- # wait 90061 00:25:39.233 06:55:53 -- host/timeout.sh@67 -- # killprocess 90018 00:25:39.233 06:55:53 -- common/autotest_common.sh@936 -- # '[' -z 90018 ']' 00:25:39.233 06:55:53 -- common/autotest_common.sh@940 -- # kill -0 90018 00:25:39.233 06:55:53 -- common/autotest_common.sh@941 -- # uname 00:25:39.233 06:55:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:39.233 06:55:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90018 00:25:39.233 06:55:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:39.233 06:55:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:39.233 killing process with pid 90018 00:25:39.233 06:55:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90018' 00:25:39.233 06:55:53 -- common/autotest_common.sh@955 -- # kill 90018 00:25:39.233 Received shutdown signal, test time was about 9.285290 seconds 00:25:39.233 00:25:39.233 Latency(us) 00:25:39.233 [2024-12-14T06:55:53.225Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.233 [2024-12-14T06:55:53.225Z] =================================================================================================================== 00:25:39.233 [2024-12-14T06:55:53.225Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:39.233 06:55:53 -- common/autotest_common.sh@960 -- # wait 90018 00:25:39.801 06:55:53 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:39.801 [2024-12-14 06:55:53.750039] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:39.801 06:55:53 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:25:39.801 06:55:53 -- host/timeout.sh@74 -- # bdevperf_pid=90220 00:25:39.801 06:55:53 -- host/timeout.sh@76 -- # waitforlisten 90220 /var/tmp/bdevperf.sock 00:25:39.801 06:55:53 -- common/autotest_common.sh@829 -- # '[' -z 90220 ']' 00:25:39.801 06:55:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:39.801 06:55:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:39.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:39.801 06:55:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:39.801 06:55:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:39.801 06:55:53 -- common/autotest_common.sh@10 -- # set +x 00:25:40.060 [2024-12-14 06:55:53.814515] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:40.060 [2024-12-14 06:55:53.814649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90220 ] 00:25:40.060 [2024-12-14 06:55:53.938787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.060 [2024-12-14 06:55:54.026239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:40.997 06:55:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:40.997 06:55:54 -- common/autotest_common.sh@862 -- # return 0 00:25:40.997 06:55:54 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:40.997 06:55:54 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:25:41.258 NVMe0n1 00:25:41.258 06:55:55 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:41.258 06:55:55 -- host/timeout.sh@84 -- # rpc_pid=90262 00:25:41.258 06:55:55 -- host/timeout.sh@86 -- # sleep 1 00:25:41.516 Running I/O for 10 seconds... 
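The xtrace lines above carry the whole setup for the next phase of the test: the TCP listener is added back on the target, a fresh bdevperf (pid 90220) is started idle on its own RPC socket, the controller is attached with explicit reconnect/loss timeouts, and the queued verify job is then kicked off. Collected into one sketch for readability; paths, flags, and addresses are copied verbatim from the trace, and it assumes the nvmf target and subsystem nqn.2016-06.io.spdk:cnode1 configured earlier in this run are still present:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Re-add the TCP listener on the target side
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Start bdevperf idle (-z) on core mask 0x4 with a private RPC socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &

    # bdev_nvme options and controller attach, exactly as traced: reconnect every 1 s,
    # fail pending I/O back after 2 s without a connection, give the controller up after 5 s
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

    # Start the queued I/O job defined on the bdevperf command line
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests

The -q 128 -o 4096 -w verify -t 10 job configured here is what produces the per-bdev latency tables reported later in the run.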
00:25:42.456 06:55:56 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:42.456 [2024-12-14 06:55:56.402451] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402531] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402558] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402567] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402589] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402597] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402604] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402612] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402634] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402640] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402647] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402654] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402661] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402668] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402675] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402683] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402689] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402696] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402703] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402710] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402716] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402723] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402729] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402736] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402742] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402749] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402756] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402762] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402769] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402776] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402798] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402805] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402827] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402834] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402843] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402851] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402858] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402866] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402873] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402882] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402889] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402896] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402903] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402910] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402918] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402925] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402932] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402939] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402946] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.402953] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb70 is same with the state(5) to be set 00:25:42.456 [2024-12-14 06:55:56.403385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.456 [2024-12-14 06:55:56.403429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.456 [2024-12-14 06:55:56.403449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.456 [2024-12-14 06:55:56.403459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.456 [2024-12-14 06:55:56.403477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.456 [2024-12-14 06:55:56.403485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.456 [2024-12-14 06:55:56.403494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.456 [2024-12-14 06:55:56.403502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.456 [2024-12-14 06:55:56.403512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.456 [2024-12-14 06:55:56.403519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.456 [2024-12-14 06:55:56.403529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.456 [2024-12-14 06:55:56.403537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.456 [2024-12-14 06:55:56.403546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:42.456 [2024-12-14 06:55:56.403554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.456 [2024-12-14 06:55:56.403563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.456 [2024-12-14 06:55:56.403571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.456 [2024-12-14 06:55:56.403580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.456 [2024-12-14 06:55:56.403587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.456 [2024-12-14 06:55:56.403596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.456 [2024-12-14 06:55:56.403604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.456 [2024-12-14 06:55:56.403612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.456 [2024-12-14 06:55:56.403620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.456 [2024-12-14 06:55:56.403629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.456 [2024-12-14 06:55:56.403636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.456 [2024-12-14 06:55:56.403645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.456 [2024-12-14 06:55:56.403653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.456 [2024-12-14 06:55:56.403662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.456 [2024-12-14 06:55:56.403670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.456 [2024-12-14 06:55:56.403679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.403687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.403696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.403703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.403712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 
[2024-12-14 06:55:56.403722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.403732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.403740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.403749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.403757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.403766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.403773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.403782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.403790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.403799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.403806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.403816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.403823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.403832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.403839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.403848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.403856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.403864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.403871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.403880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.403888] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.403897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.403904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.403913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.403920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.403930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.403937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.403961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.403973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.403983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.457 [2024-12-14 06:55:56.403990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.457 [2024-12-14 06:55:56.404008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.404025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.457 [2024-12-14 06:55:56.404042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.404059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.457 [2024-12-14 06:55:56.404075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.404092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.457 [2024-12-14 06:55:56.404109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.457 [2024-12-14 06:55:56.404125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.457 [2024-12-14 06:55:56.404141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.404158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.404176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.404192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.404209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.404225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.404241] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.404265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.404281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.404304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.404320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.404337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.404353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.457 [2024-12-14 06:55:56.404370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.457 [2024-12-14 06:55:56.404378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.457 [2024-12-14 06:55:56.404385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.458 [2024-12-14 06:55:56.404403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.404420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.458 [2024-12-14 06:55:56.404437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.458 [2024-12-14 06:55:56.404453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.458 [2024-12-14 06:55:56.404470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.458 [2024-12-14 06:55:56.404486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.404502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.404518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.404540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.404556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.404573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.404590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.404607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.404622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.404639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.404655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.404672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.458 [2024-12-14 06:55:56.404688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.458 [2024-12-14 06:55:56.404704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.458 [2024-12-14 06:55:56.404720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.458 [2024-12-14 06:55:56.404737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.458 [2024-12-14 06:55:56.404753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 
06:55:56.404762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.404769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.404785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.458 [2024-12-14 06:55:56.404808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.404824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.458 [2024-12-14 06:55:56.404841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.458 [2024-12-14 06:55:56.404857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.404873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.458 [2024-12-14 06:55:56.404890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.404913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.458 [2024-12-14 06:55:56.404930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404939] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.404958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.404975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.404984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.458 [2024-12-14 06:55:56.404992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.405001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.405008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.405017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.458 [2024-12-14 06:55:56.405025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.405034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.405043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.405053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.458 [2024-12-14 06:55:56.405060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.405069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.405078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.458 [2024-12-14 06:55:56.405092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.458 [2024-12-14 06:55:56.405100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:129360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405127] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.459 [2024-12-14 06:55:56.405135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.459 [2024-12-14 06:55:56.405152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.459 [2024-12-14 06:55:56.405168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:128800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 
lba:128816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.459 [2024-12-14 06:55:56.405322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.459 [2024-12-14 06:55:56.405338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:129416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.459 [2024-12-14 06:55:56.405410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:129440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.459 [2024-12-14 06:55:56.405444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:42.459 [2024-12-14 06:55:56.405460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:129464 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:129472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:128912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.459 [2024-12-14 06:55:56.405635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x908050 is same with the state(5) to be set 00:25:42.459 [2024-12-14 
06:55:56.405677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:42.459 [2024-12-14 06:55:56.405684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:42.459 [2024-12-14 06:55:56.405691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128928 len:8 PRP1 0x0 PRP2 0x0 00:25:42.459 [2024-12-14 06:55:56.405699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.459 [2024-12-14 06:55:56.405753] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x908050 was disconnected and freed. reset controller. 00:25:42.459 [2024-12-14 06:55:56.405952] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.459 [2024-12-14 06:55:56.406024] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x892dc0 (9): Bad file descriptor 00:25:42.459 [2024-12-14 06:55:56.406121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.459 [2024-12-14 06:55:56.406163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.459 [2024-12-14 06:55:56.406178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x892dc0 with addr=10.0.0.2, port=4420 00:25:42.459 [2024-12-14 06:55:56.406187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892dc0 is same with the state(5) to be set 00:25:42.459 [2024-12-14 06:55:56.406219] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x892dc0 (9): Bad file descriptor 00:25:42.459 [2024-12-14 06:55:56.406234] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.459 [2024-12-14 06:55:56.406243] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.459 [2024-12-14 06:55:56.406252] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.459 [2024-12-14 06:55:56.406269] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
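The repeated "connect() failed, errno = 111" lines above are the host driver retrying while no listener is accepting on 10.0.0.2:4420: errno 111 on Linux is ECONNREFUSED. A quick, illustrative way to confirm that mapping from a shell on the build host (not part of the test run itself):

  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
  # on Linux this prints: ECONNREFUSED Connection refused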
00:25:42.459 [2024-12-14 06:55:56.406279] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:42.459 06:55:56 -- host/timeout.sh@90 -- # sleep 1
00:25:43.837 [2024-12-14 06:55:57.406347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.837 [2024-12-14 06:55:57.406415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.837 [2024-12-14 06:55:57.406432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x892dc0 with addr=10.0.0.2, port=4420
00:25:43.837 [2024-12-14 06:55:57.406443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892dc0 is same with the state(5) to be set
00:25:43.837 [2024-12-14 06:55:57.406460] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x892dc0 (9): Bad file descriptor
00:25:43.837 [2024-12-14 06:55:57.406476] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.837 [2024-12-14 06:55:57.406484] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.837 [2024-12-14 06:55:57.406492] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.837 [2024-12-14 06:55:57.406509] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.837 [2024-12-14 06:55:57.406545] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.837 06:55:57 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:43.837 [2024-12-14 06:55:57.625141] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:43.837 06:55:57 -- host/timeout.sh@92 -- # wait 90262
00:25:44.773 [2024-12-14 06:55:58.417482] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:51.340
00:25:51.340 Latency(us)
00:25:51.340 [2024-12-14T06:56:05.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:51.340 [2024-12-14T06:56:05.332Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:51.340 Verification LBA range: start 0x0 length 0x4000
00:25:51.340 NVMe0n1 : 10.01 9516.05 37.17 0.00 0.00 13432.75 1169.22 3019898.88
00:25:51.340 [2024-12-14T06:56:05.332Z] ===================================================================================================================
00:25:51.340 [2024-12-14T06:56:05.332Z] Total : 9516.05 37.17 0.00 0.00 13432.75 1169.22 3019898.88
00:25:51.340 0
00:25:51.340 06:56:05 -- host/timeout.sh@97 -- # rpc_pid=90379
00:25:51.340 06:56:05 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:51.340 06:56:05 -- host/timeout.sh@98 -- # sleep 1
00:25:51.604 Running I/O for 10 seconds...
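What the trace shows here: reconnect attempts to 10.0.0.2:4420 keep failing until host/timeout.sh@91 re-adds the TCP listener via rpc.py, after which the pending controller reset completes and bdevperf is told to run I/O again over its RPC socket. A minimal sketch of that listener toggle, using the same rpc.py invocations that appear in the trace (it assumes the target from this run is still up and exporting nqn.2016-06.io.spdk:cnode1; paths mirror the ones printed above):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420   # host reconnects now fail with ECONNREFUSED
  sleep 1
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420      # listener is back; the queued controller reset succeeds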
00:25:52.574 06:56:06 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:52.574 [2024-12-14 06:56:06.496585] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496661] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496672] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496679] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496686] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496693] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496701] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496708] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496714] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496721] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496728] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496736] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496743] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496750] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496757] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496763] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496770] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496777] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496784] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496790] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496798] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496805] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496812] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496819] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496826] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496834] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496841] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496849] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496855] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.574 [2024-12-14 06:56:06.496862] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.496869] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.496876] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.496883] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.496889] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.496896] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.496902] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.496909] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.496915] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.496922] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.496928] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.496935] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.496992] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497000] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497007] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497014] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497022] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497029] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497036] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497043] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497051] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497059] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497066] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497073] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497080] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497086] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497094] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497102] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497112] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497119] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497131] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497138] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497145] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497152] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497159] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the 
state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497167] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249c70 is same with the state(5) to be set 00:25:52.575 [2024-12-14 06:56:06.497629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.497679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.497701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.497711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.497720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.497728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.497738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.497746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.497755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.497763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.497772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.497780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.497790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.497797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.497806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.497813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.497822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.497830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.497839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 
[2024-12-14 06:56:06.497846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.497856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:117640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.497863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.497872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:117656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.497879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.497888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:116968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.497895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.497904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:116976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.497911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.497920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:116992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.497928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.497936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:117000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.497971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.497983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.497995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.498005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.498013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.498023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.498031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.498042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:117072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.498050] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.498060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:117080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.498068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.498077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.498085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.498095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.498103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.575 [2024-12-14 06:56:06.498112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.575 [2024-12-14 06:56:06.498120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:117144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.576 [2024-12-14 06:56:06.498139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:117176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.576 [2024-12-14 06:56:06.498157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.576 [2024-12-14 06:56:06.498174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:117224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.576 [2024-12-14 06:56:06.498192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.576 [2024-12-14 06:56:06.498221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.576 [2024-12-14 06:56:06.498238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.576 [2024-12-14 06:56:06.498256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.576 [2024-12-14 06:56:06.498275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:117720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.576 [2024-12-14 06:56:06.498292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.576 [2024-12-14 06:56:06.498310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:117744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.576 [2024-12-14 06:56:06.498340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.576 [2024-12-14 06:56:06.498357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:117768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.576 [2024-12-14 06:56:06.498375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:117776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.576 [2024-12-14 06:56:06.498392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.576 [2024-12-14 06:56:06.498411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.576 [2024-12-14 06:56:06.498429] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.576 [2024-12-14 06:56:06.498446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.576 [2024-12-14 06:56:06.498463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.576 [2024-12-14 06:56:06.498481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.576 [2024-12-14 06:56:06.498498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.576 [2024-12-14 06:56:06.498514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:117840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.576 [2024-12-14 06:56:06.498532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.576 [2024-12-14 06:56:06.498551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.576 [2024-12-14 06:56:06.498569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.576 [2024-12-14 06:56:06.498587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.576 [2024-12-14 06:56:06.498605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.576 [2024-12-14 06:56:06.498622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.576 [2024-12-14 06:56:06.498640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.576 [2024-12-14 06:56:06.498657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.576 [2024-12-14 06:56:06.498698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.576 [2024-12-14 06:56:06.498715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.576 [2024-12-14 06:56:06.498732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:117928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.576 [2024-12-14 06:56:06.498748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.576 [2024-12-14 06:56:06.498766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.576 [2024-12-14 06:56:06.498783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:117952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.576 [2024-12-14 06:56:06.498800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:52.576 [2024-12-14 06:56:06.498809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.576 [2024-12-14 06:56:06.498816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.576 [2024-12-14 06:56:06.498833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.576 [2024-12-14 06:56:06.498849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.576 [2024-12-14 06:56:06.498868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.576 [2024-12-14 06:56:06.498878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:117992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.498885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.498894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:117232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.498901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.498910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.498917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.498926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.498933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.498942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.498967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.498991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:117320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 
06:56:06.499010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:118008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.577 [2024-12-14 06:56:06.499243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:118024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.577 [2024-12-14 06:56:06.499298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.577 [2024-12-14 06:56:06.499315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:118048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:118056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:118064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499374] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.577 [2024-12-14 06:56:06.499519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:118080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 
nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.577 [2024-12-14 06:56:06.499551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:118096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:118104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:118112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.577 [2024-12-14 06:56:06.499615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:118120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.577 [2024-12-14 06:56:06.499622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.499631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.578 [2024-12-14 06:56:06.499638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.499647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.578 [2024-12-14 06:56:06.499654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.499663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.578 [2024-12-14 06:56:06.499670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.499678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.578 [2024-12-14 06:56:06.499685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.499695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:118160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.578 [2024-12-14 06:56:06.499702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.499710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:118168 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:52.578 [2024-12-14 06:56:06.499717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.499732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:118176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.578 [2024-12-14 06:56:06.499740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.499751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.578 [2024-12-14 06:56:06.499758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.499767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.578 [2024-12-14 06:56:06.499775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.499784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.578 [2024-12-14 06:56:06.499791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.499800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:118208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.578 [2024-12-14 06:56:06.499807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.499815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.578 [2024-12-14 06:56:06.499822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.499836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:118224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.578 [2024-12-14 06:56:06.499844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.499853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:118232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.578 [2024-12-14 06:56:06.499860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.499869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.578 [2024-12-14 06:56:06.499876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.499885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:52.578 [2024-12-14 06:56:06.499893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.499901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.578 [2024-12-14 06:56:06.499908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.499918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.578 [2024-12-14 06:56:06.499926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.499934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:117664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.578 [2024-12-14 06:56:06.499942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.499968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.578 [2024-12-14 06:56:06.499988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.499999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.578 [2024-12-14 06:56:06.500007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.500017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.578 [2024-12-14 06:56:06.500026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.500041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x903f00 is same with the state(5) to be set 00:25:52.578 [2024-12-14 06:56:06.500052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.578 [2024-12-14 06:56:06.500059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.578 [2024-12-14 06:56:06.500067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117760 len:8 PRP1 0x0 PRP2 0x0 00:25:52.578 [2024-12-14 06:56:06.500074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.578 [2024-12-14 06:56:06.500137] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x903f00 was disconnected and freed. reset controller. 
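The run of ABORTED - SQ DELETION notices above is the host side draining every command still queued on I/O qpair 1 once the TCP connection to the target is gone: each outstanding READ/WRITE is completed manually with status ABORTED - SQ DELETION (00/08), then qpair 0x903f00 is disconnected and freed and a controller reset is scheduled. To gauge how many commands were flushed in a window like this, the aborted completions can simply be counted in a saved copy of this console output (the file name below is only a placeholder, not an artifact produced by this job):

    # count the manually aborted completions for I/O queue pair 1 in a captured log
    grep -c 'ABORTED - SQ DELETION (00/08) qid:1' bdevperf.log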
00:25:52.578 [2024-12-14 06:56:06.500341] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.578 [2024-12-14 06:56:06.500418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x892dc0 (9): Bad file descriptor 00:25:52.578 [2024-12-14 06:56:06.500537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.578 [2024-12-14 06:56:06.500589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.578 [2024-12-14 06:56:06.500610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x892dc0 with addr=10.0.0.2, port=4420 00:25:52.578 [2024-12-14 06:56:06.500619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892dc0 is same with the state(5) to be set 00:25:52.578 [2024-12-14 06:56:06.500642] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x892dc0 (9): Bad file descriptor 00:25:52.578 [2024-12-14 06:56:06.500657] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.578 [2024-12-14 06:56:06.500666] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.578 [2024-12-14 06:56:06.500675] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.578 [2024-12-14 06:56:06.500693] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.578 [2024-12-14 06:56:06.500703] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.578 06:56:06 -- host/timeout.sh@101 -- # sleep 3 00:25:53.515 [2024-12-14 06:56:07.500768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.515 [2024-12-14 06:56:07.500853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.515 [2024-12-14 06:56:07.500870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x892dc0 with addr=10.0.0.2, port=4420 00:25:53.515 [2024-12-14 06:56:07.500881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892dc0 is same with the state(5) to be set 00:25:53.515 [2024-12-14 06:56:07.500899] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x892dc0 (9): Bad file descriptor 00:25:53.515 [2024-12-14 06:56:07.500914] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.515 [2024-12-14 06:56:07.500922] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.515 [2024-12-14 06:56:07.500931] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.515 [2024-12-14 06:56:07.500949] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
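errno = 111 in the posix_sock_create errors above is ECONNREFUSED: the 10.0.0.2:4420 listener has been taken down, so every reconnect attempt made by the reset path is refused, the controller stays in the failed state, and the reset is retried on the next pass. A quick, illustrative way to confirm what that errno value means on the build host (not part of the test itself):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused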
00:25:53.515 [2024-12-14 06:56:07.500975] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.892 [2024-12-14 06:56:08.501051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.892 [2024-12-14 06:56:08.501124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.892 [2024-12-14 06:56:08.501140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x892dc0 with addr=10.0.0.2, port=4420 00:25:54.892 [2024-12-14 06:56:08.501150] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892dc0 is same with the state(5) to be set 00:25:54.892 [2024-12-14 06:56:08.501166] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x892dc0 (9): Bad file descriptor 00:25:54.892 [2024-12-14 06:56:08.501191] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.892 [2024-12-14 06:56:08.501199] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.892 [2024-12-14 06:56:08.501207] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.892 [2024-12-14 06:56:08.501223] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.892 [2024-12-14 06:56:08.501234] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:55.829 [2024-12-14 06:56:09.502999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.829 [2024-12-14 06:56:09.503061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.829 [2024-12-14 06:56:09.503078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x892dc0 with addr=10.0.0.2, port=4420 00:25:55.829 [2024-12-14 06:56:09.503089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892dc0 is same with the state(5) to be set 00:25:55.829 [2024-12-14 06:56:09.503175] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x892dc0 (9): Bad file descriptor 00:25:55.829 [2024-12-14 06:56:09.503325] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:55.829 [2024-12-14 06:56:09.503348] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:55.829 [2024-12-14 06:56:09.503371] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:55.829 [2024-12-14 06:56:09.505214] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:55.829 [2024-12-14 06:56:09.505239] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:55.829 06:56:09 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:55.829 [2024-12-14 06:56:09.775547] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.829 06:56:09 -- host/timeout.sh@103 -- # wait 90379 00:25:56.766 [2024-12-14 06:56:10.539136] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
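This is the heart of the timeout case: while the listener is down the host retries roughly once per second (06:56:07, 06:56:08, 06:56:09) and each attempt fails with ECONNREFUSED, and the first attempt after host/timeout.sh@102 re-creates the 10.0.0.2:4420 listener succeeds ("Resetting controller successful." at 06:56:10.539). A minimal sketch of that remove/wait/re-add pattern, built only from the rpc.py sub-commands visible in this trace (the wrapper structure is assumed, not the literal host/timeout.sh source):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # drop the TCP listener: in-flight I/O is aborted (SQ DELETION) and reconnects start failing
    $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    sleep 3    # the host keeps retrying about once per second during the outage
    # restore the listener: the next reconnect attempt succeeds and I/O resumes
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420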
00:26:02.032 00:26:02.032 Latency(us) 00:26:02.032 [2024-12-14T06:56:16.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.032 [2024-12-14T06:56:16.024Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:02.032 Verification LBA range: start 0x0 length 0x4000 00:26:02.032 NVMe0n1 : 10.01 8955.29 34.98 7278.44 0.00 7872.97 681.43 3019898.88 00:26:02.033 [2024-12-14T06:56:16.025Z] =================================================================================================================== 00:26:02.033 [2024-12-14T06:56:16.025Z] Total : 8955.29 34.98 7278.44 0.00 7872.97 0.00 3019898.88 00:26:02.033 0 00:26:02.033 06:56:15 -- host/timeout.sh@105 -- # killprocess 90220 00:26:02.033 06:56:15 -- common/autotest_common.sh@936 -- # '[' -z 90220 ']' 00:26:02.033 06:56:15 -- common/autotest_common.sh@940 -- # kill -0 90220 00:26:02.033 06:56:15 -- common/autotest_common.sh@941 -- # uname 00:26:02.033 06:56:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:02.033 06:56:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90220 00:26:02.033 06:56:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:26:02.033 06:56:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:26:02.033 06:56:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90220' 00:26:02.033 killing process with pid 90220 00:26:02.033 Received shutdown signal, test time was about 10.000000 seconds 00:26:02.033 00:26:02.033 Latency(us) 00:26:02.033 [2024-12-14T06:56:16.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.033 [2024-12-14T06:56:16.025Z] =================================================================================================================== 00:26:02.033 [2024-12-14T06:56:16.025Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:02.033 06:56:15 -- common/autotest_common.sh@955 -- # kill 90220 00:26:02.033 06:56:15 -- common/autotest_common.sh@960 -- # wait 90220 00:26:02.033 06:56:15 -- host/timeout.sh@110 -- # bdevperf_pid=90504 00:26:02.033 06:56:15 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:26:02.033 06:56:15 -- host/timeout.sh@112 -- # waitforlisten 90504 /var/tmp/bdevperf.sock 00:26:02.033 06:56:15 -- common/autotest_common.sh@829 -- # '[' -z 90504 ']' 00:26:02.033 06:56:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:02.033 06:56:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:02.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:02.033 06:56:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:02.033 06:56:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:02.033 06:56:15 -- common/autotest_common.sh@10 -- # set +x 00:26:02.033 [2024-12-14 06:56:15.866526] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
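For the next case the test launches a fresh bdevperf (pid 90504) with -z, so the application comes up idle and is driven entirely over /var/tmp/bdevperf.sock: bdev_nvme_set_options and bdev_nvme_attach_controller (with --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2) are issued over that socket before perform_tests starts the I/O, as the trace below shows. A condensed sketch of that launch-then-configure flow, with every flag copied from the trace; the backgrounding and wait logic is assumed rather than taken from the real script:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bdevperf.sock

    # -z: start idle and wait to be configured/started over the RPC socket
    $SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w randread -t 10 -f &

    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options -r -1 -e 9
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests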
00:26:02.033 [2024-12-14 06:56:15.866637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90504 ] 00:26:02.033 [2024-12-14 06:56:16.004328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.291 [2024-12-14 06:56:16.107447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:02.858 06:56:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:02.858 06:56:16 -- common/autotest_common.sh@862 -- # return 0 00:26:02.858 06:56:16 -- host/timeout.sh@116 -- # dtrace_pid=90528 00:26:02.858 06:56:16 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 90504 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:26:02.858 06:56:16 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:26:03.116 06:56:17 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:26:03.683 NVMe0n1 00:26:03.683 06:56:17 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:03.683 06:56:17 -- host/timeout.sh@124 -- # rpc_pid=90587 00:26:03.683 06:56:17 -- host/timeout.sh@125 -- # sleep 1 00:26:03.683 Running I/O for 10 seconds... 00:26:04.618 06:56:18 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:04.881 [2024-12-14 06:56:18.663856] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.663916] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.663925] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.663933] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.663965] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.663975] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.663982] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.663990] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.663997] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664004] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664011] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664018] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664026] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664033] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664039] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664046] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664054] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664061] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664068] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664074] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664080] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664087] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664094] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664102] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664109] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664115] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664122] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664128] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664135] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664141] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664147] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664153] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664160] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664167] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664180] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664186] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664193] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664205] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664211] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664219] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664230] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664237] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664244] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664253] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664260] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664268] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664275] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664284] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664291] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664298] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664306] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664313] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664319] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664327] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the 
state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664334] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664341] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664362] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664369] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664375] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664382] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664388] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664394] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664400] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664408] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664415] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664421] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664429] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664435] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664441] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664447] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664454] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664460] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664467] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664473] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664480] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664486] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.881 [2024-12-14 06:56:18.664492] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.882 [2024-12-14 06:56:18.664499] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.882 [2024-12-14 06:56:18.664505] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:26:04.882 [2024-12-14 06:56:18.664760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.664807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.664829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.664840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.664851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.664860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.664871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:32880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.664881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.664891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.664900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.664910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.664918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.664928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.664936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.664959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.664969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.664979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.664988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:52800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:35024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 
06:56:18.665187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:121064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:114000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:29072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:124952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:112168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:34328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:73448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.882 [2024-12-14 06:56:18.665527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.882 [2024-12-14 06:56:18.665537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:123896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:29856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:57712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:36984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:124040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:04.883 [2024-12-14 06:56:18.665969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.665990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:67104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.665999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.666010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.666018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.666028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.666037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.666047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.666055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.666066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.666075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.666085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.666094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.666104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:106720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.666112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.666122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.666131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.666141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.666149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.666159] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.666176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.666187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.666196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.666216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.666229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.666240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.666248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.666258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.666267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.666277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.666286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.883 [2024-12-14 06:56:18.666296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.883 [2024-12-14 06:56:18.666304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:86104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:55360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:57560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:91728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:123216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:87 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:123824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:116584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:81152 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:40048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:04.884 [2024-12-14 06:56:18.666977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.666987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:34752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.666995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.667006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.667014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.667024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.667032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.667043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:70760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.667051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.667061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:104896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.884 [2024-12-14 06:56:18.667070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.884 [2024-12-14 06:56:18.667080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.885 [2024-12-14 06:56:18.667088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.885 [2024-12-14 06:56:18.667098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:114728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.885 [2024-12-14 06:56:18.667106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.885 [2024-12-14 06:56:18.667116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.885 [2024-12-14 06:56:18.667125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.885 [2024-12-14 06:56:18.667135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.885 [2024-12-14 06:56:18.667143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.885 [2024-12-14 06:56:18.667153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.885 [2024-12-14 
06:56:18.667162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.885 [2024-12-14 06:56:18.667172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.885 [2024-12-14 06:56:18.667180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.885 [2024-12-14 06:56:18.667190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:119120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.885 [2024-12-14 06:56:18.667199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.885 [2024-12-14 06:56:18.667209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.885 [2024-12-14 06:56:18.667217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.885 [2024-12-14 06:56:18.667227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.885 [2024-12-14 06:56:18.667243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.885 [2024-12-14 06:56:18.667253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.885 [2024-12-14 06:56:18.667262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.885 [2024-12-14 06:56:18.667273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.885 [2024-12-14 06:56:18.667282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.885 [2024-12-14 06:56:18.667292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.885 [2024-12-14 06:56:18.667300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.885 [2024-12-14 06:56:18.667310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5c050 is same with the state(5) to be set 00:26:04.885 [2024-12-14 06:56:18.667322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.885 [2024-12-14 06:56:18.667329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.885 [2024-12-14 06:56:18.667337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70632 len:8 PRP1 0x0 PRP2 0x0 00:26:04.885 [2024-12-14 06:56:18.667346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.885 [2024-12-14 06:56:18.667414] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c5c050 was disconnected and freed. reset controller. 
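The flood of notices above is the timeout test resetting the controller: when I/O qpair 1 is disconnected, every READ still queued on it is completed manually with generic status (00/08), which nvme_qpair.c prints as "ABORTED - SQ DELETION" before the qpair is freed. As a small aside (an illustrative sketch, not a helper from the SPDK tree), the SCT/SC pair in those completions can be decoded like this:

  # Hypothetical decoder for the "(SCT/SC)" pair printed above, e.g. "(00/08)".
  decode_nvme_status() {
      local sct=$1 sc=$2
      if [[ $sct == "00" ]]; then            # status code type 0x0: generic command status
          case $sc in
              00) echo "SUCCESS" ;;
              08) echo "ABORTED - SQ DELETION" ;;   # command aborted because its submission queue was deleted
              *)  echo "generic status 0x$sc" ;;
          esac
      else
          echo "sct=0x$sct sc=0x$sc"
      fi
  }
  decode_nvme_status 00 08    # -> ABORTED - SQ DELETION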
00:26:04.885 [2024-12-14 06:56:18.667673] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.885 [2024-12-14 06:56:18.667771] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6dc0 (9): Bad file descriptor 00:26:04.885 [2024-12-14 06:56:18.667874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.885 [2024-12-14 06:56:18.667926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.885 [2024-12-14 06:56:18.667955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6dc0 with addr=10.0.0.2, port=4420 00:26:04.885 [2024-12-14 06:56:18.667968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6dc0 is same with the state(5) to be set 00:26:04.885 [2024-12-14 06:56:18.667986] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6dc0 (9): Bad file descriptor 00:26:04.885 [2024-12-14 06:56:18.668002] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.885 [2024-12-14 06:56:18.668011] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.885 [2024-12-14 06:56:18.668021] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.885 [2024-12-14 06:56:18.668040] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.885 [2024-12-14 06:56:18.668050] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.885 06:56:18 -- host/timeout.sh@128 -- # wait 90587 00:26:06.838 [2024-12-14 06:56:20.668253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.838 [2024-12-14 06:56:20.668377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.838 [2024-12-14 06:56:20.668396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6dc0 with addr=10.0.0.2, port=4420 00:26:06.838 [2024-12-14 06:56:20.668411] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6dc0 is same with the state(5) to be set 00:26:06.838 [2024-12-14 06:56:20.668438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6dc0 (9): Bad file descriptor 00:26:06.838 [2024-12-14 06:56:20.668458] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.838 [2024-12-14 06:56:20.668467] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.838 [2024-12-14 06:56:20.668478] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.838 [2024-12-14 06:56:20.668505] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
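A note on the repeated "connect() failed, errno = 111" lines above (added here as an illustration, not test output): on Linux errno 111 is ECONNREFUSED, i.e. nothing is accepting connections on 10.0.0.2:4420 at this point in the test, so each reconnect attempt fails and bdev_nvme logs "Resetting controller failed." before scheduling the next retry. A quick way to confirm the errno name, assuming the usual Linux header location:

  grep -w ECONNREFUSED /usr/include/asm-generic/errno.h
  # -> #define ECONNREFUSED 111 /* Connection refused */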
00:26:06.838 [2024-12-14 06:56:20.668516] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.739 [2024-12-14 06:56:22.668610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.739 [2024-12-14 06:56:22.668703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.739 [2024-12-14 06:56:22.668721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6dc0 with addr=10.0.0.2, port=4420
00:26:08.739 [2024-12-14 06:56:22.668733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6dc0 is same with the state(5) to be set
00:26:08.739 [2024-12-14 06:56:22.668752] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6dc0 (9): Bad file descriptor
00:26:08.739 [2024-12-14 06:56:22.668779] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.739 [2024-12-14 06:56:22.668790] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.739 [2024-12-14 06:56:22.668799] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.739 [2024-12-14 06:56:22.668818] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.739 [2024-12-14 06:56:22.668828] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:11.276 [2024-12-14 06:56:24.668860] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:11.276 [2024-12-14 06:56:24.668910] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:11.276 [2024-12-14 06:56:24.668921] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:11.276 [2024-12-14 06:56:24.668931] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:26:11.276 [2024-12-14 06:56:24.668957] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:11.843
00:26:11.843 Latency(us)
00:26:11.843 [2024-12-14T06:56:25.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:11.843 [2024-12-14T06:56:25.835Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:26:11.843 NVMe0n1 : 8.13 2738.08 10.70 15.74 0.00 46432.83 1869.27 7015926.69
00:26:11.843 [2024-12-14T06:56:25.835Z] ===================================================================================================================
00:26:11.843 [2024-12-14T06:56:25.835Z] Total : 2738.08 10.70 15.74 0.00 46432.83 1869.27 7015926.69
00:26:11.843 0
00:26:11.843 06:56:25 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:26:11.843 Attaching 5 probes...
00:26:11.843 1286.114742: reset bdev controller NVMe0 00:26:11.843 1286.265674: reconnect bdev controller NVMe0 00:26:11.843 3286.528573: reconnect delay bdev controller NVMe0 00:26:11.843 3286.565003: reconnect bdev controller NVMe0 00:26:11.843 5286.992245: reconnect delay bdev controller NVMe0 00:26:11.843 5287.007017: reconnect bdev controller NVMe0 00:26:11.843 7287.296093: reconnect delay bdev controller NVMe0 00:26:11.843 7287.309717: reconnect bdev controller NVMe0 00:26:11.843 06:56:25 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:26:11.843 06:56:25 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:26:11.843 06:56:25 -- host/timeout.sh@136 -- # kill 90528 00:26:11.843 06:56:25 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:11.843 06:56:25 -- host/timeout.sh@139 -- # killprocess 90504 00:26:11.843 06:56:25 -- common/autotest_common.sh@936 -- # '[' -z 90504 ']' 00:26:11.843 06:56:25 -- common/autotest_common.sh@940 -- # kill -0 90504 00:26:11.843 06:56:25 -- common/autotest_common.sh@941 -- # uname 00:26:11.843 06:56:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:11.843 06:56:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90504 00:26:11.843 killing process with pid 90504 00:26:11.843 Received shutdown signal, test time was about 8.202630 seconds 00:26:11.843 00:26:11.843 Latency(us) 00:26:11.843 [2024-12-14T06:56:25.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.843 [2024-12-14T06:56:25.835Z] =================================================================================================================== 00:26:11.843 [2024-12-14T06:56:25.835Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:11.843 06:56:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:26:11.843 06:56:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:26:11.843 06:56:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90504' 00:26:11.843 06:56:25 -- common/autotest_common.sh@955 -- # kill 90504 00:26:11.843 06:56:25 -- common/autotest_common.sh@960 -- # wait 90504 00:26:12.101 06:56:26 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:12.359 06:56:26 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:26:12.359 06:56:26 -- host/timeout.sh@145 -- # nvmftestfini 00:26:12.359 06:56:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:12.359 06:56:26 -- nvmf/common.sh@116 -- # sync 00:26:12.617 06:56:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:12.617 06:56:26 -- nvmf/common.sh@119 -- # set +e 00:26:12.617 06:56:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:12.617 06:56:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:12.617 rmmod nvme_tcp 00:26:12.617 rmmod nvme_fabrics 00:26:12.617 rmmod nvme_keyring 00:26:12.617 06:56:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:12.617 06:56:26 -- nvmf/common.sh@123 -- # set -e 00:26:12.617 06:56:26 -- nvmf/common.sh@124 -- # return 0 00:26:12.617 06:56:26 -- nvmf/common.sh@477 -- # '[' -n 89921 ']' 00:26:12.617 06:56:26 -- nvmf/common.sh@478 -- # killprocess 89921 00:26:12.617 06:56:26 -- common/autotest_common.sh@936 -- # '[' -z 89921 ']' 00:26:12.617 06:56:26 -- common/autotest_common.sh@940 -- # kill -0 89921 00:26:12.617 06:56:26 -- common/autotest_common.sh@941 -- # uname 00:26:12.617 06:56:26 -- common/autotest_common.sh@941 -- # '[' Linux = 
Linux ']' 00:26:12.617 06:56:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89921 00:26:12.617 killing process with pid 89921 00:26:12.617 06:56:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:12.617 06:56:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:12.617 06:56:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89921' 00:26:12.617 06:56:26 -- common/autotest_common.sh@955 -- # kill 89921 00:26:12.617 06:56:26 -- common/autotest_common.sh@960 -- # wait 89921 00:26:12.875 06:56:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:12.875 06:56:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:12.875 06:56:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:12.875 06:56:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:12.875 06:56:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:12.875 06:56:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.875 06:56:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:12.875 06:56:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.875 06:56:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:12.875 00:26:12.875 real 0m47.550s 00:26:12.875 user 2m18.826s 00:26:12.875 sys 0m5.543s 00:26:12.875 06:56:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:12.875 06:56:26 -- common/autotest_common.sh@10 -- # set +x 00:26:12.875 ************************************ 00:26:12.875 END TEST nvmf_timeout 00:26:12.875 ************************************ 00:26:13.133 06:56:26 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:26:13.133 06:56:26 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:26:13.133 06:56:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:13.133 06:56:26 -- common/autotest_common.sh@10 -- # set +x 00:26:13.133 06:56:26 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:26:13.133 00:26:13.133 real 18m56.148s 00:26:13.133 user 60m27.886s 00:26:13.133 sys 4m2.323s 00:26:13.133 ************************************ 00:26:13.133 END TEST nvmf_tcp 00:26:13.133 ************************************ 00:26:13.133 06:56:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:13.133 06:56:26 -- common/autotest_common.sh@10 -- # set +x 00:26:13.133 06:56:26 -- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]] 00:26:13.133 06:56:26 -- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:13.133 06:56:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:13.133 06:56:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:13.133 06:56:26 -- common/autotest_common.sh@10 -- # set +x 00:26:13.133 ************************************ 00:26:13.133 START TEST spdkcli_nvmf_tcp 00:26:13.133 ************************************ 00:26:13.133 06:56:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:13.133 * Looking for test storage... 
00:26:13.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:26:13.133 06:56:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:13.133 06:56:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:13.133 06:56:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:13.133 06:56:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:13.133 06:56:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:13.133 06:56:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:13.133 06:56:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:13.133 06:56:27 -- scripts/common.sh@335 -- # IFS=.-: 00:26:13.133 06:56:27 -- scripts/common.sh@335 -- # read -ra ver1 00:26:13.133 06:56:27 -- scripts/common.sh@336 -- # IFS=.-: 00:26:13.133 06:56:27 -- scripts/common.sh@336 -- # read -ra ver2 00:26:13.133 06:56:27 -- scripts/common.sh@337 -- # local 'op=<' 00:26:13.133 06:56:27 -- scripts/common.sh@339 -- # ver1_l=2 00:26:13.133 06:56:27 -- scripts/common.sh@340 -- # ver2_l=1 00:26:13.133 06:56:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:13.133 06:56:27 -- scripts/common.sh@343 -- # case "$op" in 00:26:13.133 06:56:27 -- scripts/common.sh@344 -- # : 1 00:26:13.133 06:56:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:13.133 06:56:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:13.133 06:56:27 -- scripts/common.sh@364 -- # decimal 1 00:26:13.133 06:56:27 -- scripts/common.sh@352 -- # local d=1 00:26:13.133 06:56:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:13.133 06:56:27 -- scripts/common.sh@354 -- # echo 1 00:26:13.133 06:56:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:13.133 06:56:27 -- scripts/common.sh@365 -- # decimal 2 00:26:13.133 06:56:27 -- scripts/common.sh@352 -- # local d=2 00:26:13.133 06:56:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:13.133 06:56:27 -- scripts/common.sh@354 -- # echo 2 00:26:13.391 06:56:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:13.391 06:56:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:13.391 06:56:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:13.391 06:56:27 -- scripts/common.sh@367 -- # return 0 00:26:13.391 06:56:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:13.391 06:56:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:13.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.391 --rc genhtml_branch_coverage=1 00:26:13.391 --rc genhtml_function_coverage=1 00:26:13.391 --rc genhtml_legend=1 00:26:13.391 --rc geninfo_all_blocks=1 00:26:13.391 --rc geninfo_unexecuted_blocks=1 00:26:13.391 00:26:13.391 ' 00:26:13.391 06:56:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:13.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.391 --rc genhtml_branch_coverage=1 00:26:13.391 --rc genhtml_function_coverage=1 00:26:13.391 --rc genhtml_legend=1 00:26:13.391 --rc geninfo_all_blocks=1 00:26:13.391 --rc geninfo_unexecuted_blocks=1 00:26:13.391 00:26:13.391 ' 00:26:13.391 06:56:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:13.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.391 --rc genhtml_branch_coverage=1 00:26:13.391 --rc genhtml_function_coverage=1 00:26:13.391 --rc genhtml_legend=1 00:26:13.391 --rc geninfo_all_blocks=1 00:26:13.391 --rc geninfo_unexecuted_blocks=1 00:26:13.391 00:26:13.391 ' 00:26:13.391 06:56:27 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:13.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.391 --rc genhtml_branch_coverage=1 00:26:13.391 --rc genhtml_function_coverage=1 00:26:13.391 --rc genhtml_legend=1 00:26:13.391 --rc geninfo_all_blocks=1 00:26:13.391 --rc geninfo_unexecuted_blocks=1 00:26:13.391 00:26:13.391 ' 00:26:13.391 06:56:27 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:26:13.391 06:56:27 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:26:13.391 06:56:27 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:26:13.391 06:56:27 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:13.391 06:56:27 -- nvmf/common.sh@7 -- # uname -s 00:26:13.391 06:56:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:13.391 06:56:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:13.391 06:56:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:13.391 06:56:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:13.391 06:56:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:13.391 06:56:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:13.391 06:56:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:13.391 06:56:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:13.391 06:56:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:13.391 06:56:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:13.391 06:56:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:26:13.391 06:56:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:26:13.391 06:56:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:13.391 06:56:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:13.391 06:56:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:13.391 06:56:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:13.391 06:56:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:13.391 06:56:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:13.391 06:56:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:13.391 06:56:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.391 06:56:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.391 06:56:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.391 06:56:27 -- paths/export.sh@5 -- # export PATH 00:26:13.392 06:56:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.392 06:56:27 -- nvmf/common.sh@46 -- # : 0 00:26:13.392 06:56:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:13.392 06:56:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:13.392 06:56:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:13.392 06:56:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:13.392 06:56:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:13.392 06:56:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:13.392 06:56:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:13.392 06:56:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:13.392 06:56:27 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:13.392 06:56:27 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:13.392 06:56:27 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:13.392 06:56:27 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:13.392 06:56:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:13.392 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:26:13.392 06:56:27 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:13.392 06:56:27 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=90816 00:26:13.392 06:56:27 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:13.392 06:56:27 -- spdkcli/common.sh@34 -- # waitforlisten 90816 00:26:13.392 06:56:27 -- common/autotest_common.sh@829 -- # '[' -z 90816 ']' 00:26:13.392 06:56:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.392 06:56:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:13.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.392 06:56:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.392 06:56:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:13.392 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:26:13.392 [2024-12-14 06:56:27.199336] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
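At this point the spdkcli suite has launched the target (run_nvmf_tgt above) and waitforlisten is polling /var/tmp/spdk.sock. A minimal sketch of that start-and-wait sequence, reusing the paths from the log; the retry loop and the rpc_get_methods probe are illustrative stand-ins, the real logic lives in test/spdkcli/common.sh and autotest_common.sh:

  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 &
  nvmf_tgt_pid=$!
  # Poll the default RPC socket until the target answers, then the spdkcli commands can run.
  for _ in $(seq 1 100); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.1
  done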
00:26:13.392 [2024-12-14 06:56:27.199419] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90816 ] 00:26:13.392 [2024-12-14 06:56:27.331382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:13.650 [2024-12-14 06:56:27.417739] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:13.650 [2024-12-14 06:56:27.418074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.650 [2024-12-14 06:56:27.418087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.585 06:56:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:14.585 06:56:28 -- common/autotest_common.sh@862 -- # return 0 00:26:14.585 06:56:28 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:14.585 06:56:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:14.585 06:56:28 -- common/autotest_common.sh@10 -- # set +x 00:26:14.585 06:56:28 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:14.585 06:56:28 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:14.585 06:56:28 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:14.585 06:56:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:14.585 06:56:28 -- common/autotest_common.sh@10 -- # set +x 00:26:14.585 06:56:28 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:14.585 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:14.585 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:14.585 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:14.585 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:14.585 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:14.585 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:14.585 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:14.585 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:14.585 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:14.585 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:14.585 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:14.585 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:14.585 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:14.585 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:14.585 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:14.585 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:14.585 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:14.585 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:14.585 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:14.585 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:14.585 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:14.585 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:14.585 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:14.585 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:14.585 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:14.585 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:14.585 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:14.585 ' 00:26:14.844 [2024-12-14 06:56:28.732965] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:17.374 [2024-12-14 06:56:31.004801] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.317 [2024-12-14 06:56:32.289845] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:26:20.875 [2024-12-14 06:56:34.675324] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:26:22.774 [2024-12-14 06:56:36.724629] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:26:24.673 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:24.673 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:24.673 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:24.673 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:24.673 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:24.673 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:24.673 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:24.673 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:24.673 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:24.673 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:24.673 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:24.673 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:24.673 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:24.673 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:24.673 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:24.673 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:24.673 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:24.673 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:24.673 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:24.673 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:24.673 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:24.673 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:24.673 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:24.673 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:26:24.673 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:24.673 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:24.673 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:24.673 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:24.673 06:56:38 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:24.673 06:56:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:24.673 06:56:38 -- common/autotest_common.sh@10 -- # set +x 00:26:24.673 06:56:38 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:24.673 06:56:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:24.673 06:56:38 -- common/autotest_common.sh@10 -- # set +x 00:26:24.673 06:56:38 -- spdkcli/nvmf.sh@69 -- # check_match 00:26:24.673 06:56:38 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:26:25.239 06:56:38 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:25.239 06:56:38 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:25.239 06:56:38 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:25.239 06:56:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:25.239 06:56:38 -- common/autotest_common.sh@10 -- # set +x 00:26:25.239 06:56:39 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:25.239 06:56:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:25.239 06:56:39 -- 
common/autotest_common.sh@10 -- # set +x 00:26:25.239 06:56:39 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:25.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:25.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:25.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:25.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:25.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:26:25.239 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:25.239 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:25.239 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:25.239 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:25.239 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:25.240 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:25.240 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:25.240 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:25.240 ' 00:26:30.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:30.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:30.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:30.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:30.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:30.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:30.530 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:30.530 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:30.530 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:30.530 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:30.530 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:30.530 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:30.530 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:30.530 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:30.791 06:56:44 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:30.791 06:56:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:30.791 06:56:44 -- common/autotest_common.sh@10 -- # set +x 00:26:30.791 06:56:44 -- spdkcli/nvmf.sh@90 -- # killprocess 90816 00:26:30.791 06:56:44 -- common/autotest_common.sh@936 -- # '[' -z 90816 ']' 00:26:30.791 06:56:44 -- common/autotest_common.sh@940 -- # kill -0 90816 00:26:30.791 06:56:44 -- common/autotest_common.sh@941 -- # uname 00:26:30.791 06:56:44 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:30.791 06:56:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90816 00:26:30.791 06:56:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:30.791 06:56:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:30.791 killing process with pid 90816 00:26:30.791 06:56:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90816' 00:26:30.791 06:56:44 -- common/autotest_common.sh@955 -- # kill 90816 00:26:30.791 [2024-12-14 06:56:44.700663] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:30.791 06:56:44 -- common/autotest_common.sh@960 -- # wait 90816 00:26:31.052 06:56:45 -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:31.052 06:56:45 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:31.052 06:56:45 -- spdkcli/common.sh@13 -- # '[' -n 90816 ']' 00:26:31.052 06:56:45 -- spdkcli/common.sh@14 -- # killprocess 90816 00:26:31.052 06:56:45 -- common/autotest_common.sh@936 -- # '[' -z 90816 ']' 00:26:31.052 06:56:45 -- common/autotest_common.sh@940 -- # kill -0 90816 00:26:31.052 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (90816) - No such process 00:26:31.052 Process with pid 90816 is not found 00:26:31.052 06:56:45 -- common/autotest_common.sh@963 -- # echo 'Process with pid 90816 is not found' 00:26:31.052 06:56:45 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:31.052 06:56:45 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:31.052 06:56:45 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:31.052 00:26:31.052 real 0m18.048s 00:26:31.052 user 0m39.125s 00:26:31.052 sys 0m0.999s 00:26:31.052 06:56:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:31.052 06:56:45 -- common/autotest_common.sh@10 -- # set +x 00:26:31.052 ************************************ 00:26:31.052 END TEST spdkcli_nvmf_tcp 00:26:31.052 ************************************ 00:26:31.325 06:56:45 -- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:31.325 06:56:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:31.325 06:56:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:31.325 06:56:45 -- common/autotest_common.sh@10 -- # set +x 00:26:31.325 ************************************ 00:26:31.325 START TEST nvmf_identify_passthru 00:26:31.325 ************************************ 00:26:31.325 06:56:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:31.325 * Looking for test storage... 
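The spdkcli_clear_nvmf_config stage above unwinds the configuration built earlier in this test: namespaces and hosts first, then listen addresses, then the subsystems, and finally the malloc bdevs. A minimal manual equivalent, assuming the spdkcli target from the test is still running on its default RPC socket and that scripts/spdkcli.py accepts one command per invocation (as the ll /nvmf call above does), would be:

    CLI=/home/vagrant/spdk_repo/spdk/scripts/spdkcli.py
    # same delete order as the test job above
    $CLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1"
    $CLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all"
    $CLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2"
    $CLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all"
    $CLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262"
    $CLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all"
    $CLI "/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3"
    $CLI "/nvmf/subsystem delete_all"
    for m in Malloc6 Malloc5 Malloc4 Malloc3 Malloc2 Malloc1; do
        $CLI "/bdevs/malloc delete $m"   # bdevs last, once no subsystem references them
    done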
00:26:31.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:31.325 06:56:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:31.325 06:56:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:31.325 06:56:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:31.325 06:56:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:31.325 06:56:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:31.325 06:56:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:31.325 06:56:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:31.325 06:56:45 -- scripts/common.sh@335 -- # IFS=.-: 00:26:31.325 06:56:45 -- scripts/common.sh@335 -- # read -ra ver1 00:26:31.325 06:56:45 -- scripts/common.sh@336 -- # IFS=.-: 00:26:31.325 06:56:45 -- scripts/common.sh@336 -- # read -ra ver2 00:26:31.325 06:56:45 -- scripts/common.sh@337 -- # local 'op=<' 00:26:31.325 06:56:45 -- scripts/common.sh@339 -- # ver1_l=2 00:26:31.325 06:56:45 -- scripts/common.sh@340 -- # ver2_l=1 00:26:31.325 06:56:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:31.325 06:56:45 -- scripts/common.sh@343 -- # case "$op" in 00:26:31.325 06:56:45 -- scripts/common.sh@344 -- # : 1 00:26:31.325 06:56:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:31.325 06:56:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:31.325 06:56:45 -- scripts/common.sh@364 -- # decimal 1 00:26:31.325 06:56:45 -- scripts/common.sh@352 -- # local d=1 00:26:31.325 06:56:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:31.325 06:56:45 -- scripts/common.sh@354 -- # echo 1 00:26:31.325 06:56:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:31.325 06:56:45 -- scripts/common.sh@365 -- # decimal 2 00:26:31.325 06:56:45 -- scripts/common.sh@352 -- # local d=2 00:26:31.325 06:56:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:31.325 06:56:45 -- scripts/common.sh@354 -- # echo 2 00:26:31.325 06:56:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:31.326 06:56:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:31.326 06:56:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:31.326 06:56:45 -- scripts/common.sh@367 -- # return 0 00:26:31.326 06:56:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:31.326 06:56:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:31.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.326 --rc genhtml_branch_coverage=1 00:26:31.326 --rc genhtml_function_coverage=1 00:26:31.326 --rc genhtml_legend=1 00:26:31.326 --rc geninfo_all_blocks=1 00:26:31.326 --rc geninfo_unexecuted_blocks=1 00:26:31.326 00:26:31.326 ' 00:26:31.326 06:56:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:31.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.326 --rc genhtml_branch_coverage=1 00:26:31.326 --rc genhtml_function_coverage=1 00:26:31.326 --rc genhtml_legend=1 00:26:31.326 --rc geninfo_all_blocks=1 00:26:31.326 --rc geninfo_unexecuted_blocks=1 00:26:31.326 00:26:31.326 ' 00:26:31.326 06:56:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:31.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.326 --rc genhtml_branch_coverage=1 00:26:31.326 --rc genhtml_function_coverage=1 00:26:31.326 --rc genhtml_legend=1 00:26:31.326 --rc geninfo_all_blocks=1 00:26:31.326 --rc geninfo_unexecuted_blocks=1 00:26:31.326 00:26:31.326 ' 00:26:31.326 
06:56:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:31.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.326 --rc genhtml_branch_coverage=1 00:26:31.326 --rc genhtml_function_coverage=1 00:26:31.326 --rc genhtml_legend=1 00:26:31.326 --rc geninfo_all_blocks=1 00:26:31.326 --rc geninfo_unexecuted_blocks=1 00:26:31.326 00:26:31.326 ' 00:26:31.326 06:56:45 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:31.326 06:56:45 -- nvmf/common.sh@7 -- # uname -s 00:26:31.326 06:56:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.326 06:56:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.326 06:56:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.326 06:56:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.326 06:56:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:31.326 06:56:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.326 06:56:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.326 06:56:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.326 06:56:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.326 06:56:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.326 06:56:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:26:31.326 06:56:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:26:31.326 06:56:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.326 06:56:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.326 06:56:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:31.326 06:56:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:31.326 06:56:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.326 06:56:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.326 06:56:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.326 06:56:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.326 06:56:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.326 06:56:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.326 06:56:45 -- paths/export.sh@5 -- # export PATH 00:26:31.326 06:56:45 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.326 06:56:45 -- nvmf/common.sh@46 -- # : 0 00:26:31.326 06:56:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:31.326 06:56:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:31.326 06:56:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:31.326 06:56:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.326 06:56:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.326 06:56:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:31.326 06:56:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:31.326 06:56:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:31.326 06:56:45 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:31.326 06:56:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.326 06:56:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.326 06:56:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.326 06:56:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.326 06:56:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.326 06:56:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.326 06:56:45 -- paths/export.sh@5 -- # export PATH 00:26:31.326 06:56:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.326 06:56:45 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:26:31.326 06:56:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:31.326 06:56:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.326 06:56:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:31.326 06:56:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:31.326 06:56:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:31.326 06:56:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.326 06:56:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:31.326 06:56:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.326 06:56:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:31.326 06:56:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:31.326 06:56:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:31.326 06:56:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:31.326 06:56:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:31.326 06:56:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:31.326 06:56:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:31.326 06:56:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:31.326 06:56:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:31.326 06:56:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:31.326 06:56:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:31.326 06:56:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:31.326 06:56:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:31.326 06:56:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:31.326 06:56:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:31.326 06:56:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:31.326 06:56:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:31.326 06:56:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:31.326 06:56:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:31.586 06:56:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:31.586 Cannot find device "nvmf_tgt_br" 00:26:31.586 06:56:45 -- nvmf/common.sh@154 -- # true 00:26:31.586 06:56:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:31.586 Cannot find device "nvmf_tgt_br2" 00:26:31.586 06:56:45 -- nvmf/common.sh@155 -- # true 00:26:31.586 06:56:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:31.586 06:56:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:31.586 Cannot find device "nvmf_tgt_br" 00:26:31.586 06:56:45 -- nvmf/common.sh@157 -- # true 00:26:31.586 06:56:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:31.586 Cannot find device "nvmf_tgt_br2" 00:26:31.586 06:56:45 -- nvmf/common.sh@158 -- # true 00:26:31.586 06:56:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:31.586 06:56:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:31.586 06:56:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:31.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:31.586 06:56:45 -- nvmf/common.sh@161 -- # true 00:26:31.586 06:56:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:31.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:26:31.586 06:56:45 -- nvmf/common.sh@162 -- # true 00:26:31.586 06:56:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:31.586 06:56:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:31.586 06:56:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:31.586 06:56:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:31.586 06:56:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:31.586 06:56:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:31.586 06:56:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:31.586 06:56:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:31.586 06:56:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:31.586 06:56:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:31.586 06:56:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:31.586 06:56:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:31.586 06:56:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:31.586 06:56:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:31.586 06:56:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:31.586 06:56:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:31.586 06:56:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:31.586 06:56:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:31.586 06:56:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:31.586 06:56:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:31.586 06:56:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:31.846 06:56:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:31.846 06:56:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:31.846 06:56:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:31.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:31.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:26:31.846 00:26:31.846 --- 10.0.0.2 ping statistics --- 00:26:31.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.846 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:26:31.846 06:56:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:31.846 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:31.846 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:26:31.846 00:26:31.846 --- 10.0.0.3 ping statistics --- 00:26:31.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.846 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:26:31.846 06:56:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:31.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:31.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:26:31.846 00:26:31.846 --- 10.0.0.1 ping statistics --- 00:26:31.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.846 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:26:31.846 06:56:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:31.846 06:56:45 -- nvmf/common.sh@421 -- # return 0 00:26:31.846 06:56:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:31.846 06:56:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:31.846 06:56:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:31.846 06:56:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:31.846 06:56:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:31.846 06:56:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:31.846 06:56:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:31.846 06:56:45 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:31.846 06:56:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:31.846 06:56:45 -- common/autotest_common.sh@10 -- # set +x 00:26:31.846 06:56:45 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:31.846 06:56:45 -- common/autotest_common.sh@1519 -- # bdfs=() 00:26:31.846 06:56:45 -- common/autotest_common.sh@1519 -- # local bdfs 00:26:31.846 06:56:45 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:26:31.846 06:56:45 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:26:31.846 06:56:45 -- common/autotest_common.sh@1508 -- # bdfs=() 00:26:31.846 06:56:45 -- common/autotest_common.sh@1508 -- # local bdfs 00:26:31.846 06:56:45 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:31.846 06:56:45 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:31.846 06:56:45 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:26:31.846 06:56:45 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:26:31.846 06:56:45 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:31.846 06:56:45 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:26:31.846 06:56:45 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:26:31.846 06:56:45 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:26:31.846 06:56:45 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:31.846 06:56:45 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:31.846 06:56:45 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:26:32.107 06:56:45 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:26:32.107 06:56:45 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:32.107 06:56:45 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:26:32.107 06:56:45 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:32.107 06:56:46 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:26:32.107 06:56:46 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:32.107 06:56:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:32.107 06:56:46 -- common/autotest_common.sh@10 -- # set +x 00:26:32.366 06:56:46 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:26:32.366 06:56:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:32.366 06:56:46 -- common/autotest_common.sh@10 -- # set +x 00:26:32.366 06:56:46 -- target/identify_passthru.sh@31 -- # nvmfpid=91319 00:26:32.366 06:56:46 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:32.366 06:56:46 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:32.366 06:56:46 -- target/identify_passthru.sh@35 -- # waitforlisten 91319 00:26:32.366 06:56:46 -- common/autotest_common.sh@829 -- # '[' -z 91319 ']' 00:26:32.366 06:56:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.366 06:56:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:32.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.366 06:56:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.366 06:56:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:32.366 06:56:46 -- common/autotest_common.sh@10 -- # set +x 00:26:32.366 [2024-12-14 06:56:46.170361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:32.366 [2024-12-14 06:56:46.170486] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.366 [2024-12-14 06:56:46.310167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:32.627 [2024-12-14 06:56:46.417474] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:32.627 [2024-12-14 06:56:46.417660] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.627 [2024-12-14 06:56:46.417675] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.627 [2024-12-14 06:56:46.417685] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
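Both this test and nvmf_dif below call nvmf_veth_init, whose ip commands are recorded above: a dedicated network namespace for the target, three veth pairs, and a bridge joining the host-side ends, with the initiator on 10.0.0.1 and the target addresses 10.0.0.2/10.0.0.3 inside the namespace. Condensed into a sketch (run as root on a clean host; the names and addresses are the ones the test uses):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk            # target-side ends move into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address stays in the root namespace
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for br in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do        # bridge the host-side peer interfaces
        ip link set "$br" up
        ip link set "$br" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT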
00:26:32.627 [2024-12-14 06:56:46.417831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.627 [2024-12-14 06:56:46.418036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:32.627 [2024-12-14 06:56:46.418674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:32.627 [2024-12-14 06:56:46.418733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.561 06:56:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:33.561 06:56:47 -- common/autotest_common.sh@862 -- # return 0 00:26:33.561 06:56:47 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:26:33.561 06:56:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.561 06:56:47 -- common/autotest_common.sh@10 -- # set +x 00:26:33.561 06:56:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.561 06:56:47 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:26:33.561 06:56:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.561 06:56:47 -- common/autotest_common.sh@10 -- # set +x 00:26:33.561 [2024-12-14 06:56:47.330754] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:26:33.561 06:56:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.561 06:56:47 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:33.561 06:56:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.561 06:56:47 -- common/autotest_common.sh@10 -- # set +x 00:26:33.561 [2024-12-14 06:56:47.345209] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.561 06:56:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.561 06:56:47 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:33.561 06:56:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:33.561 06:56:47 -- common/autotest_common.sh@10 -- # set +x 00:26:33.561 06:56:47 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:26:33.561 06:56:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.561 06:56:47 -- common/autotest_common.sh@10 -- # set +x 00:26:33.561 Nvme0n1 00:26:33.561 06:56:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.561 06:56:47 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:33.561 06:56:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.561 06:56:47 -- common/autotest_common.sh@10 -- # set +x 00:26:33.561 06:56:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.561 06:56:47 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:33.561 06:56:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.561 06:56:47 -- common/autotest_common.sh@10 -- # set +x 00:26:33.561 06:56:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.561 06:56:47 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:33.561 06:56:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.561 06:56:47 -- common/autotest_common.sh@10 -- # set +x 00:26:33.561 [2024-12-14 06:56:47.490803] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:33.561 06:56:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:33.561 06:56:47 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:33.561 06:56:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.561 06:56:47 -- common/autotest_common.sh@10 -- # set +x 00:26:33.561 [2024-12-14 06:56:47.498500] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:33.561 [ 00:26:33.561 { 00:26:33.561 "allow_any_host": true, 00:26:33.561 "hosts": [], 00:26:33.561 "listen_addresses": [], 00:26:33.561 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:33.561 "subtype": "Discovery" 00:26:33.561 }, 00:26:33.561 { 00:26:33.561 "allow_any_host": true, 00:26:33.561 "hosts": [], 00:26:33.561 "listen_addresses": [ 00:26:33.561 { 00:26:33.561 "adrfam": "IPv4", 00:26:33.561 "traddr": "10.0.0.2", 00:26:33.561 "transport": "TCP", 00:26:33.561 "trsvcid": "4420", 00:26:33.561 "trtype": "TCP" 00:26:33.561 } 00:26:33.561 ], 00:26:33.561 "max_cntlid": 65519, 00:26:33.561 "max_namespaces": 1, 00:26:33.561 "min_cntlid": 1, 00:26:33.561 "model_number": "SPDK bdev Controller", 00:26:33.561 "namespaces": [ 00:26:33.561 { 00:26:33.561 "bdev_name": "Nvme0n1", 00:26:33.561 "name": "Nvme0n1", 00:26:33.561 "nguid": "ED0A22FAFF3144A4B8A9A70C264E5201", 00:26:33.561 "nsid": 1, 00:26:33.561 "uuid": "ed0a22fa-ff31-44a4-b8a9-a70c264e5201" 00:26:33.561 } 00:26:33.561 ], 00:26:33.561 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:33.561 "serial_number": "SPDK00000000000001", 00:26:33.561 "subtype": "NVMe" 00:26:33.561 } 00:26:33.561 ] 00:26:33.561 06:56:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.561 06:56:47 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:33.561 06:56:47 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:33.561 06:56:47 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:33.819 06:56:47 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:26:33.819 06:56:47 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:33.820 06:56:47 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:33.820 06:56:47 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:34.077 06:56:47 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:26:34.077 06:56:47 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:26:34.077 06:56:47 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:26:34.077 06:56:47 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:34.077 06:56:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.077 06:56:47 -- common/autotest_common.sh@10 -- # set +x 00:26:34.077 06:56:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.077 06:56:47 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:34.077 06:56:47 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:34.077 06:56:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:34.077 06:56:47 -- nvmf/common.sh@116 -- # sync 00:26:34.077 06:56:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:34.077 06:56:48 -- nvmf/common.sh@119 -- # set +e 00:26:34.077 06:56:48 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:26:34.077 06:56:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:34.077 rmmod nvme_tcp 00:26:34.078 rmmod nvme_fabrics 00:26:34.078 rmmod nvme_keyring 00:26:34.335 06:56:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:34.335 06:56:48 -- nvmf/common.sh@123 -- # set -e 00:26:34.335 06:56:48 -- nvmf/common.sh@124 -- # return 0 00:26:34.335 06:56:48 -- nvmf/common.sh@477 -- # '[' -n 91319 ']' 00:26:34.335 06:56:48 -- nvmf/common.sh@478 -- # killprocess 91319 00:26:34.335 06:56:48 -- common/autotest_common.sh@936 -- # '[' -z 91319 ']' 00:26:34.335 06:56:48 -- common/autotest_common.sh@940 -- # kill -0 91319 00:26:34.335 06:56:48 -- common/autotest_common.sh@941 -- # uname 00:26:34.336 06:56:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:34.336 06:56:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91319 00:26:34.336 killing process with pid 91319 00:26:34.336 06:56:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:34.336 06:56:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:34.336 06:56:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91319' 00:26:34.336 06:56:48 -- common/autotest_common.sh@955 -- # kill 91319 00:26:34.336 [2024-12-14 06:56:48.120193] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:34.336 06:56:48 -- common/autotest_common.sh@960 -- # wait 91319 00:26:34.593 06:56:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:34.593 06:56:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:34.593 06:56:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:34.593 06:56:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:34.593 06:56:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:34.593 06:56:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.593 06:56:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:34.593 06:56:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.593 06:56:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:34.593 00:26:34.593 real 0m3.395s 00:26:34.594 user 0m8.362s 00:26:34.594 sys 0m0.935s 00:26:34.594 06:56:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:34.594 06:56:48 -- common/autotest_common.sh@10 -- # set +x 00:26:34.594 ************************************ 00:26:34.594 END TEST nvmf_identify_passthru 00:26:34.594 ************************************ 00:26:34.594 06:56:48 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:26:34.594 06:56:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:34.594 06:56:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:34.594 06:56:48 -- common/autotest_common.sh@10 -- # set +x 00:26:34.594 ************************************ 00:26:34.594 START TEST nvmf_dif 00:26:34.594 ************************************ 00:26:34.594 06:56:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:26:34.852 * Looking for test storage... 
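The nvmf_identify_passthru run above boils down to: start nvmf_tgt with --wait-for-rpc, enable the custom identify-ctrlr handler, re-export the local PCIe drive at 0000:00:06.0 over TCP, then check that spdk_nvme_identify reports the same serial (12340) and model (QEMU) on both paths. A sketch of the target-side sequence, using scripts/rpc.py in place of the test's rpc_cmd wrapper (an assumption; both talk to /var/tmp/spdk.sock):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target started as: nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
    $RPC nvmf_set_config --passthru-identify-ctrlr
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: identify over the fabric and compare with the local identify output
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'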
00:26:34.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:34.852 06:56:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:34.852 06:56:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:34.852 06:56:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:34.852 06:56:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:34.852 06:56:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:34.852 06:56:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:34.852 06:56:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:34.852 06:56:48 -- scripts/common.sh@335 -- # IFS=.-: 00:26:34.852 06:56:48 -- scripts/common.sh@335 -- # read -ra ver1 00:26:34.852 06:56:48 -- scripts/common.sh@336 -- # IFS=.-: 00:26:34.852 06:56:48 -- scripts/common.sh@336 -- # read -ra ver2 00:26:34.852 06:56:48 -- scripts/common.sh@337 -- # local 'op=<' 00:26:34.853 06:56:48 -- scripts/common.sh@339 -- # ver1_l=2 00:26:34.853 06:56:48 -- scripts/common.sh@340 -- # ver2_l=1 00:26:34.853 06:56:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:34.853 06:56:48 -- scripts/common.sh@343 -- # case "$op" in 00:26:34.853 06:56:48 -- scripts/common.sh@344 -- # : 1 00:26:34.853 06:56:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:34.853 06:56:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:34.853 06:56:48 -- scripts/common.sh@364 -- # decimal 1 00:26:34.853 06:56:48 -- scripts/common.sh@352 -- # local d=1 00:26:34.853 06:56:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:34.853 06:56:48 -- scripts/common.sh@354 -- # echo 1 00:26:34.853 06:56:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:34.853 06:56:48 -- scripts/common.sh@365 -- # decimal 2 00:26:34.853 06:56:48 -- scripts/common.sh@352 -- # local d=2 00:26:34.853 06:56:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:34.853 06:56:48 -- scripts/common.sh@354 -- # echo 2 00:26:34.853 06:56:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:34.853 06:56:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:34.853 06:56:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:34.853 06:56:48 -- scripts/common.sh@367 -- # return 0 00:26:34.853 06:56:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:34.853 06:56:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:34.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.853 --rc genhtml_branch_coverage=1 00:26:34.853 --rc genhtml_function_coverage=1 00:26:34.853 --rc genhtml_legend=1 00:26:34.853 --rc geninfo_all_blocks=1 00:26:34.853 --rc geninfo_unexecuted_blocks=1 00:26:34.853 00:26:34.853 ' 00:26:34.853 06:56:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:34.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.853 --rc genhtml_branch_coverage=1 00:26:34.853 --rc genhtml_function_coverage=1 00:26:34.853 --rc genhtml_legend=1 00:26:34.853 --rc geninfo_all_blocks=1 00:26:34.853 --rc geninfo_unexecuted_blocks=1 00:26:34.853 00:26:34.853 ' 00:26:34.853 06:56:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:34.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.853 --rc genhtml_branch_coverage=1 00:26:34.853 --rc genhtml_function_coverage=1 00:26:34.853 --rc genhtml_legend=1 00:26:34.853 --rc geninfo_all_blocks=1 00:26:34.853 --rc geninfo_unexecuted_blocks=1 00:26:34.853 00:26:34.853 ' 00:26:34.853 
06:56:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:34.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.853 --rc genhtml_branch_coverage=1 00:26:34.853 --rc genhtml_function_coverage=1 00:26:34.853 --rc genhtml_legend=1 00:26:34.853 --rc geninfo_all_blocks=1 00:26:34.853 --rc geninfo_unexecuted_blocks=1 00:26:34.853 00:26:34.853 ' 00:26:34.853 06:56:48 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:34.853 06:56:48 -- nvmf/common.sh@7 -- # uname -s 00:26:34.853 06:56:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:34.853 06:56:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:34.853 06:56:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:34.853 06:56:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:34.853 06:56:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:34.853 06:56:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:34.853 06:56:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:34.853 06:56:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:34.853 06:56:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:34.853 06:56:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:34.853 06:56:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:26:34.853 06:56:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:26:34.853 06:56:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:34.853 06:56:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:34.853 06:56:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:34.853 06:56:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:34.853 06:56:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:34.853 06:56:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:34.853 06:56:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:34.853 06:56:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.853 06:56:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.853 06:56:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.853 06:56:48 -- paths/export.sh@5 -- # export PATH 00:26:34.853 06:56:48 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.853 06:56:48 -- nvmf/common.sh@46 -- # : 0 00:26:34.853 06:56:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:34.853 06:56:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:34.853 06:56:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:34.853 06:56:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:34.853 06:56:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:34.853 06:56:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:34.853 06:56:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:34.853 06:56:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:34.853 06:56:48 -- target/dif.sh@15 -- # NULL_META=16 00:26:34.853 06:56:48 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:34.853 06:56:48 -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:34.853 06:56:48 -- target/dif.sh@15 -- # NULL_DIF=1 00:26:34.853 06:56:48 -- target/dif.sh@135 -- # nvmftestinit 00:26:34.853 06:56:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:34.853 06:56:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:34.853 06:56:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:34.853 06:56:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:34.853 06:56:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:34.853 06:56:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.853 06:56:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:34.853 06:56:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.853 06:56:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:34.853 06:56:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:34.853 06:56:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:34.853 06:56:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:34.853 06:56:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:34.853 06:56:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:34.853 06:56:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:34.853 06:56:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:34.853 06:56:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:34.853 06:56:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:34.853 06:56:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:34.853 06:56:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:34.853 06:56:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:34.853 06:56:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:34.853 06:56:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:34.853 06:56:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:34.853 06:56:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:34.853 06:56:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:34.853 06:56:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:34.853 06:56:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:34.853 Cannot find device "nvmf_tgt_br" 
00:26:34.853 06:56:48 -- nvmf/common.sh@154 -- # true 00:26:34.853 06:56:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:34.853 Cannot find device "nvmf_tgt_br2" 00:26:34.853 06:56:48 -- nvmf/common.sh@155 -- # true 00:26:34.853 06:56:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:34.853 06:56:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:34.853 Cannot find device "nvmf_tgt_br" 00:26:34.853 06:56:48 -- nvmf/common.sh@157 -- # true 00:26:34.853 06:56:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:34.853 Cannot find device "nvmf_tgt_br2" 00:26:34.853 06:56:48 -- nvmf/common.sh@158 -- # true 00:26:34.853 06:56:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:34.853 06:56:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:34.853 06:56:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:34.853 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:34.853 06:56:48 -- nvmf/common.sh@161 -- # true 00:26:34.853 06:56:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:34.853 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:34.853 06:56:48 -- nvmf/common.sh@162 -- # true 00:26:34.854 06:56:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:34.854 06:56:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:35.112 06:56:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:35.112 06:56:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:35.112 06:56:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:35.112 06:56:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:35.112 06:56:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:35.112 06:56:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:35.112 06:56:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:35.112 06:56:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:35.112 06:56:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:35.112 06:56:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:35.112 06:56:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:35.112 06:56:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:35.112 06:56:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:35.112 06:56:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:35.112 06:56:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:35.112 06:56:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:35.112 06:56:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:35.112 06:56:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:35.112 06:56:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:35.112 06:56:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:35.112 06:56:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:35.112 06:56:49 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:35.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:35.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:26:35.112 00:26:35.112 --- 10.0.0.2 ping statistics --- 00:26:35.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.112 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:26:35.112 06:56:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:35.112 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:35.112 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:26:35.112 00:26:35.112 --- 10.0.0.3 ping statistics --- 00:26:35.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.112 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:26:35.112 06:56:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:35.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:35.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:26:35.112 00:26:35.112 --- 10.0.0.1 ping statistics --- 00:26:35.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.112 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:26:35.112 06:56:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:35.112 06:56:49 -- nvmf/common.sh@421 -- # return 0 00:26:35.112 06:56:49 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:35.112 06:56:49 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:35.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:35.628 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:35.628 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:35.628 06:56:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:35.628 06:56:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:35.628 06:56:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:35.628 06:56:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:35.628 06:56:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:35.628 06:56:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:35.628 06:56:49 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:35.628 06:56:49 -- target/dif.sh@137 -- # nvmfappstart 00:26:35.628 06:56:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:35.628 06:56:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:35.628 06:56:49 -- common/autotest_common.sh@10 -- # set +x 00:26:35.628 06:56:49 -- nvmf/common.sh@469 -- # nvmfpid=91674 00:26:35.628 06:56:49 -- nvmf/common.sh@470 -- # waitforlisten 91674 00:26:35.628 06:56:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:35.628 06:56:49 -- common/autotest_common.sh@829 -- # '[' -z 91674 ']' 00:26:35.628 06:56:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.628 06:56:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:35.628 06:56:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
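For nvmf_dif, nvmfappstart launches the target inside the namespace with the full trace mask, and the test then creates the TCP transport with DIF insert/strip enabled (the rpc calls appear in the lines that follow). As a sketch under the same workspace paths, with rpc.py standing in for the test's rpc_cmd helper:

    # target in the test namespace, listening on the default /var/tmp/spdk.sock
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    # once the socket is up, create the transport with DIF insert/strip
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip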
00:26:35.628 06:56:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:35.628 06:56:49 -- common/autotest_common.sh@10 -- # set +x 00:26:35.628 [2024-12-14 06:56:49.504364] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:35.628 [2024-12-14 06:56:49.505073] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.886 [2024-12-14 06:56:49.646804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.886 [2024-12-14 06:56:49.759453] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:35.886 [2024-12-14 06:56:49.759677] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:35.886 [2024-12-14 06:56:49.759696] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:35.886 [2024-12-14 06:56:49.759709] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:35.886 [2024-12-14 06:56:49.759759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.452 06:56:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:36.452 06:56:50 -- common/autotest_common.sh@862 -- # return 0 00:26:36.452 06:56:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:36.452 06:56:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:36.452 06:56:50 -- common/autotest_common.sh@10 -- # set +x 00:26:36.452 06:56:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.452 06:56:50 -- target/dif.sh@139 -- # create_transport 00:26:36.452 06:56:50 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:36.452 06:56:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.452 06:56:50 -- common/autotest_common.sh@10 -- # set +x 00:26:36.452 [2024-12-14 06:56:50.428654] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.452 06:56:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.452 06:56:50 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:36.452 06:56:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:36.452 06:56:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:36.452 06:56:50 -- common/autotest_common.sh@10 -- # set +x 00:26:36.710 ************************************ 00:26:36.710 START TEST fio_dif_1_default 00:26:36.710 ************************************ 00:26:36.710 06:56:50 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:26:36.710 06:56:50 -- target/dif.sh@86 -- # create_subsystems 0 00:26:36.710 06:56:50 -- target/dif.sh@28 -- # local sub 00:26:36.710 06:56:50 -- target/dif.sh@30 -- # for sub in "$@" 00:26:36.710 06:56:50 -- target/dif.sh@31 -- # create_subsystem 0 00:26:36.710 06:56:50 -- target/dif.sh@18 -- # local sub_id=0 00:26:36.710 06:56:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:36.710 06:56:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.710 06:56:50 -- common/autotest_common.sh@10 -- # set +x 00:26:36.710 bdev_null0 00:26:36.710 06:56:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.710 06:56:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:36.710 06:56:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.710 06:56:50 -- common/autotest_common.sh@10 -- # set +x 00:26:36.710 06:56:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.710 06:56:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:36.710 06:56:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.710 06:56:50 -- common/autotest_common.sh@10 -- # set +x 00:26:36.710 06:56:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.710 06:56:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:36.710 06:56:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.710 06:56:50 -- common/autotest_common.sh@10 -- # set +x 00:26:36.710 [2024-12-14 06:56:50.472786] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.710 06:56:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.710 06:56:50 -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:36.710 06:56:50 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:36.710 06:56:50 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:36.710 06:56:50 -- nvmf/common.sh@520 -- # config=() 00:26:36.710 06:56:50 -- nvmf/common.sh@520 -- # local subsystem config 00:26:36.710 06:56:50 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:36.710 06:56:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:36.710 06:56:50 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:36.710 06:56:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:36.710 { 00:26:36.710 "params": { 00:26:36.710 "name": "Nvme$subsystem", 00:26:36.710 "trtype": "$TEST_TRANSPORT", 00:26:36.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.710 "adrfam": "ipv4", 00:26:36.710 "trsvcid": "$NVMF_PORT", 00:26:36.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.710 "hdgst": ${hdgst:-false}, 00:26:36.710 "ddgst": ${ddgst:-false} 00:26:36.710 }, 00:26:36.710 "method": "bdev_nvme_attach_controller" 00:26:36.710 } 00:26:36.710 EOF 00:26:36.710 )") 00:26:36.710 06:56:50 -- target/dif.sh@82 -- # gen_fio_conf 00:26:36.710 06:56:50 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:36.710 06:56:50 -- target/dif.sh@54 -- # local file 00:26:36.710 06:56:50 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:36.710 06:56:50 -- target/dif.sh@56 -- # cat 00:26:36.710 06:56:50 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:36.710 06:56:50 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:36.710 06:56:50 -- common/autotest_common.sh@1330 -- # shift 00:26:36.710 06:56:50 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:36.710 06:56:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:36.710 06:56:50 -- nvmf/common.sh@542 -- # cat 00:26:36.710 06:56:50 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:36.710 06:56:50 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:36.710 06:56:50 -- common/autotest_common.sh@1334 -- # awk '{print 
$3}' 00:26:36.710 06:56:50 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:36.710 06:56:50 -- target/dif.sh@72 -- # (( file <= files )) 00:26:36.710 06:56:50 -- nvmf/common.sh@544 -- # jq . 00:26:36.710 06:56:50 -- nvmf/common.sh@545 -- # IFS=, 00:26:36.710 06:56:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:36.710 "params": { 00:26:36.710 "name": "Nvme0", 00:26:36.710 "trtype": "tcp", 00:26:36.710 "traddr": "10.0.0.2", 00:26:36.710 "adrfam": "ipv4", 00:26:36.710 "trsvcid": "4420", 00:26:36.710 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:36.710 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:36.710 "hdgst": false, 00:26:36.710 "ddgst": false 00:26:36.710 }, 00:26:36.710 "method": "bdev_nvme_attach_controller" 00:26:36.710 }' 00:26:36.710 06:56:50 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:36.710 06:56:50 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:36.710 06:56:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:36.710 06:56:50 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:36.710 06:56:50 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:36.710 06:56:50 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:36.710 06:56:50 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:36.710 06:56:50 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:36.710 06:56:50 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:36.710 06:56:50 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:36.968 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:36.968 fio-3.35 00:26:36.968 Starting 1 thread 00:26:37.226 [2024-12-14 06:56:51.177986] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
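Stripped of the xtrace plumbing, the single-subsystem DIF setup that fio_dif_1_default exercises is just the RPC sequence already visible above. A hedged sketch driving the same RPCs through scripts/rpc.py instead of the test's rpc_cmd wrapper (the rpc.py path is assumed from the repo layout shown elsewhere in this log):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport with DIF insert/strip enabled, as in create_transport above
    $RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
    # 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 1
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # Subsystem, namespace and TCP listener exposing that bdev on 10.0.0.2:4420
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420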
00:26:37.226 [2024-12-14 06:56:51.178771] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:49.480 00:26:49.480 filename0: (groupid=0, jobs=1): err= 0: pid=91760: Sat Dec 14 06:57:01 2024 00:26:49.480 read: IOPS=2595, BW=10.1MiB/s (10.6MB/s)(101MiB/10001msec) 00:26:49.480 slat (nsec): min=5803, max=80086, avg=7502.38, stdev=2876.34 00:26:49.480 clat (usec): min=365, max=42476, avg=1518.55, stdev=6560.09 00:26:49.480 lat (usec): min=371, max=42487, avg=1526.06, stdev=6560.20 00:26:49.480 clat percentiles (usec): 00:26:49.480 | 1.00th=[ 375], 5.00th=[ 379], 10.00th=[ 388], 20.00th=[ 396], 00:26:49.480 | 30.00th=[ 408], 40.00th=[ 416], 50.00th=[ 424], 60.00th=[ 437], 00:26:49.480 | 70.00th=[ 445], 80.00th=[ 461], 90.00th=[ 490], 95.00th=[ 529], 00:26:49.480 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:26:49.480 | 99.99th=[42206] 00:26:49.480 bw ( KiB/s): min= 3456, max=16224, per=98.65%, avg=10243.37, stdev=2765.42, samples=19 00:26:49.480 iops : min= 864, max= 4056, avg=2560.84, stdev=691.36, samples=19 00:26:49.480 lat (usec) : 500=92.23%, 750=5.07%, 1000=0.02% 00:26:49.480 lat (msec) : 50=2.68% 00:26:49.480 cpu : usr=90.30%, sys=8.38%, ctx=24, majf=0, minf=0 00:26:49.480 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:49.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:49.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:49.480 issued rwts: total=25960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:49.480 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:49.480 00:26:49.480 Run status group 0 (all jobs): 00:26:49.480 READ: bw=10.1MiB/s (10.6MB/s), 10.1MiB/s-10.1MiB/s (10.6MB/s-10.6MB/s), io=101MiB (106MB), run=10001-10001msec 00:26:49.480 06:57:01 -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:49.480 06:57:01 -- target/dif.sh@43 -- # local sub 00:26:49.480 06:57:01 -- target/dif.sh@45 -- # for sub in "$@" 00:26:49.480 06:57:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:49.480 06:57:01 -- target/dif.sh@36 -- # local sub_id=0 00:26:49.480 06:57:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:49.480 06:57:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.480 06:57:01 -- common/autotest_common.sh@10 -- # set +x 00:26:49.480 06:57:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.480 06:57:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:49.480 06:57:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.480 06:57:01 -- common/autotest_common.sh@10 -- # set +x 00:26:49.480 06:57:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.480 00:26:49.480 real 0m11.164s 00:26:49.480 user 0m9.803s 00:26:49.480 sys 0m1.148s 00:26:49.480 06:57:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:49.480 06:57:01 -- common/autotest_common.sh@10 -- # set +x 00:26:49.480 ************************************ 00:26:49.480 END TEST fio_dif_1_default 00:26:49.480 ************************************ 00:26:49.480 06:57:01 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:49.480 06:57:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:49.480 06:57:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:49.480 06:57:01 -- common/autotest_common.sh@10 -- # set +x 00:26:49.480 ************************************ 00:26:49.480 START TEST 
fio_dif_1_multi_subsystems 00:26:49.480 ************************************ 00:26:49.480 06:57:01 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:26:49.480 06:57:01 -- target/dif.sh@92 -- # local files=1 00:26:49.480 06:57:01 -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:49.480 06:57:01 -- target/dif.sh@28 -- # local sub 00:26:49.480 06:57:01 -- target/dif.sh@30 -- # for sub in "$@" 00:26:49.480 06:57:01 -- target/dif.sh@31 -- # create_subsystem 0 00:26:49.480 06:57:01 -- target/dif.sh@18 -- # local sub_id=0 00:26:49.480 06:57:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:49.480 06:57:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.480 06:57:01 -- common/autotest_common.sh@10 -- # set +x 00:26:49.480 bdev_null0 00:26:49.480 06:57:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.480 06:57:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:49.480 06:57:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.480 06:57:01 -- common/autotest_common.sh@10 -- # set +x 00:26:49.480 06:57:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.480 06:57:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:49.480 06:57:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.480 06:57:01 -- common/autotest_common.sh@10 -- # set +x 00:26:49.480 06:57:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.480 06:57:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:49.480 06:57:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.480 06:57:01 -- common/autotest_common.sh@10 -- # set +x 00:26:49.480 [2024-12-14 06:57:01.695495] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:49.480 06:57:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.480 06:57:01 -- target/dif.sh@30 -- # for sub in "$@" 00:26:49.480 06:57:01 -- target/dif.sh@31 -- # create_subsystem 1 00:26:49.480 06:57:01 -- target/dif.sh@18 -- # local sub_id=1 00:26:49.480 06:57:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:49.480 06:57:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.480 06:57:01 -- common/autotest_common.sh@10 -- # set +x 00:26:49.480 bdev_null1 00:26:49.480 06:57:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.480 06:57:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:49.480 06:57:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.480 06:57:01 -- common/autotest_common.sh@10 -- # set +x 00:26:49.480 06:57:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.480 06:57:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:49.480 06:57:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.480 06:57:01 -- common/autotest_common.sh@10 -- # set +x 00:26:49.480 06:57:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.480 06:57:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:49.480 06:57:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.480 06:57:01 -- 
common/autotest_common.sh@10 -- # set +x 00:26:49.480 06:57:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.480 06:57:01 -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:49.480 06:57:01 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:49.480 06:57:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:49.480 06:57:01 -- nvmf/common.sh@520 -- # config=() 00:26:49.480 06:57:01 -- nvmf/common.sh@520 -- # local subsystem config 00:26:49.480 06:57:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:49.480 06:57:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.480 06:57:01 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:49.480 06:57:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.480 { 00:26:49.480 "params": { 00:26:49.480 "name": "Nvme$subsystem", 00:26:49.480 "trtype": "$TEST_TRANSPORT", 00:26:49.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.480 "adrfam": "ipv4", 00:26:49.480 "trsvcid": "$NVMF_PORT", 00:26:49.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.480 "hdgst": ${hdgst:-false}, 00:26:49.480 "ddgst": ${ddgst:-false} 00:26:49.480 }, 00:26:49.480 "method": "bdev_nvme_attach_controller" 00:26:49.480 } 00:26:49.480 EOF 00:26:49.480 )") 00:26:49.480 06:57:01 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:49.480 06:57:01 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:49.480 06:57:01 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:49.480 06:57:01 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:49.480 06:57:01 -- common/autotest_common.sh@1330 -- # shift 00:26:49.480 06:57:01 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:49.480 06:57:01 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:49.480 06:57:01 -- nvmf/common.sh@542 -- # cat 00:26:49.480 06:57:01 -- target/dif.sh@82 -- # gen_fio_conf 00:26:49.480 06:57:01 -- target/dif.sh@54 -- # local file 00:26:49.480 06:57:01 -- target/dif.sh@56 -- # cat 00:26:49.480 06:57:01 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:49.480 06:57:01 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:49.480 06:57:01 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:49.480 06:57:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.480 06:57:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.480 { 00:26:49.480 "params": { 00:26:49.480 "name": "Nvme$subsystem", 00:26:49.480 "trtype": "$TEST_TRANSPORT", 00:26:49.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.480 "adrfam": "ipv4", 00:26:49.480 "trsvcid": "$NVMF_PORT", 00:26:49.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.480 "hdgst": ${hdgst:-false}, 00:26:49.480 "ddgst": ${ddgst:-false} 00:26:49.480 }, 00:26:49.480 "method": "bdev_nvme_attach_controller" 00:26:49.480 } 00:26:49.480 EOF 00:26:49.480 )") 00:26:49.480 06:57:01 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:49.481 06:57:01 -- target/dif.sh@72 -- # (( file <= files )) 00:26:49.481 06:57:01 -- target/dif.sh@73 -- # cat 00:26:49.481 06:57:01 -- nvmf/common.sh@542 -- # cat 00:26:49.481 06:57:01 -- target/dif.sh@72 
-- # (( file++ )) 00:26:49.481 06:57:01 -- target/dif.sh@72 -- # (( file <= files )) 00:26:49.481 06:57:01 -- nvmf/common.sh@544 -- # jq . 00:26:49.481 06:57:01 -- nvmf/common.sh@545 -- # IFS=, 00:26:49.481 06:57:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:49.481 "params": { 00:26:49.481 "name": "Nvme0", 00:26:49.481 "trtype": "tcp", 00:26:49.481 "traddr": "10.0.0.2", 00:26:49.481 "adrfam": "ipv4", 00:26:49.481 "trsvcid": "4420", 00:26:49.481 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:49.481 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:49.481 "hdgst": false, 00:26:49.481 "ddgst": false 00:26:49.481 }, 00:26:49.481 "method": "bdev_nvme_attach_controller" 00:26:49.481 },{ 00:26:49.481 "params": { 00:26:49.481 "name": "Nvme1", 00:26:49.481 "trtype": "tcp", 00:26:49.481 "traddr": "10.0.0.2", 00:26:49.481 "adrfam": "ipv4", 00:26:49.481 "trsvcid": "4420", 00:26:49.481 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:49.481 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:49.481 "hdgst": false, 00:26:49.481 "ddgst": false 00:26:49.481 }, 00:26:49.481 "method": "bdev_nvme_attach_controller" 00:26:49.481 }' 00:26:49.481 06:57:01 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:49.481 06:57:01 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:49.481 06:57:01 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:49.481 06:57:01 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:49.481 06:57:01 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:49.481 06:57:01 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:49.481 06:57:01 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:49.481 06:57:01 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:49.481 06:57:01 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:49.481 06:57:01 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:49.481 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:49.481 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:49.481 fio-3.35 00:26:49.481 Starting 2 threads 00:26:49.481 [2024-12-14 06:57:02.529724] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
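On the initiator side this is the stock SPDK fio bdev plugin: the JSON printed above (bdev_nvme_attach_controller entries for Nvme0 and Nvme1) is handed to fio as --spdk_json_conf, and the generated job file arrives on the second descriptor. The same invocation with ordinary files standing in for the /dev/fd descriptors (file names are illustrative; the LD_PRELOAD path and fio arguments are taken from the trace):

    # bdev.json would hold the bdev_nvme_attach_controller config printed above;
    # dif.fio would hold the generated job file (plain files instead of /dev/fd/62 and /dev/fd/61).
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio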
00:26:49.481 [2024-12-14 06:57:02.529817] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:59.459 00:26:59.459 filename0: (groupid=0, jobs=1): err= 0: pid=91920: Sat Dec 14 06:57:12 2024 00:26:59.459 read: IOPS=292, BW=1171KiB/s (1199kB/s)(11.5MiB/10031msec) 00:26:59.459 slat (nsec): min=5837, max=61817, avg=8954.98, stdev=4765.16 00:26:59.459 clat (usec): min=382, max=41983, avg=13637.82, stdev=18967.17 00:26:59.459 lat (usec): min=388, max=41993, avg=13646.78, stdev=18967.54 00:26:59.459 clat percentiles (usec): 00:26:59.459 | 1.00th=[ 392], 5.00th=[ 404], 10.00th=[ 412], 20.00th=[ 429], 00:26:59.459 | 30.00th=[ 441], 40.00th=[ 457], 50.00th=[ 478], 60.00th=[ 510], 00:26:59.459 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:59.459 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:26:59.459 | 99.99th=[42206] 00:26:59.459 bw ( KiB/s): min= 608, max= 2176, per=49.45%, avg=1172.95, stdev=522.67, samples=20 00:26:59.459 iops : min= 152, max= 544, avg=293.20, stdev=130.61, samples=20 00:26:59.459 lat (usec) : 500=58.11%, 750=8.45%, 1000=0.75% 00:26:59.459 lat (msec) : 4=0.14%, 50=32.56% 00:26:59.459 cpu : usr=96.77%, sys=2.70%, ctx=29, majf=0, minf=0 00:26:59.459 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:59.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.459 issued rwts: total=2936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.459 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:59.459 filename1: (groupid=0, jobs=1): err= 0: pid=91921: Sat Dec 14 06:57:12 2024 00:26:59.459 read: IOPS=299, BW=1200KiB/s (1229kB/s)(11.8MiB/10028msec) 00:26:59.459 slat (nsec): min=5899, max=54300, avg=8899.08, stdev=4519.46 00:26:59.459 clat (usec): min=379, max=42479, avg=13307.02, stdev=18836.28 00:26:59.459 lat (usec): min=386, max=42492, avg=13315.92, stdev=18836.65 00:26:59.459 clat percentiles (usec): 00:26:59.459 | 1.00th=[ 392], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 420], 00:26:59.459 | 30.00th=[ 433], 40.00th=[ 445], 50.00th=[ 465], 60.00th=[ 490], 00:26:59.459 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:59.459 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42730], 00:26:59.459 | 99.99th=[42730] 00:26:59.459 bw ( KiB/s): min= 640, max= 3424, per=50.67%, avg=1201.60, stdev=738.70, samples=20 00:26:59.459 iops : min= 160, max= 856, avg=300.40, stdev=184.68, samples=20 00:26:59.459 lat (usec) : 500=61.80%, 750=5.55%, 1000=0.73% 00:26:59.459 lat (msec) : 4=0.13%, 50=31.78% 00:26:59.459 cpu : usr=96.68%, sys=2.84%, ctx=19, majf=0, minf=0 00:26:59.459 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:59.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.459 issued rwts: total=3008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.459 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:59.459 00:26:59.459 Run status group 0 (all jobs): 00:26:59.459 READ: bw=2370KiB/s (2427kB/s), 1171KiB/s-1200KiB/s (1199kB/s-1229kB/s), io=23.2MiB (24.3MB), run=10028-10031msec 00:26:59.459 06:57:12 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:59.459 06:57:12 -- target/dif.sh@43 -- # local sub 00:26:59.459 06:57:12 -- target/dif.sh@45 -- # for 
sub in "$@" 00:26:59.459 06:57:12 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:59.459 06:57:12 -- target/dif.sh@36 -- # local sub_id=0 00:26:59.459 06:57:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:59.459 06:57:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.459 06:57:12 -- common/autotest_common.sh@10 -- # set +x 00:26:59.459 06:57:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.459 06:57:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:59.459 06:57:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.459 06:57:12 -- common/autotest_common.sh@10 -- # set +x 00:26:59.459 06:57:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.459 06:57:12 -- target/dif.sh@45 -- # for sub in "$@" 00:26:59.459 06:57:12 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:59.459 06:57:12 -- target/dif.sh@36 -- # local sub_id=1 00:26:59.459 06:57:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:59.459 06:57:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.459 06:57:12 -- common/autotest_common.sh@10 -- # set +x 00:26:59.459 06:57:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.459 06:57:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:59.459 06:57:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.459 06:57:12 -- common/autotest_common.sh@10 -- # set +x 00:26:59.459 06:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.459 00:26:59.459 real 0m11.340s 00:26:59.459 user 0m20.284s 00:26:59.459 sys 0m0.863s 00:26:59.459 06:57:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:59.459 06:57:13 -- common/autotest_common.sh@10 -- # set +x 00:26:59.459 ************************************ 00:26:59.459 END TEST fio_dif_1_multi_subsystems 00:26:59.459 ************************************ 00:26:59.459 06:57:13 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:59.459 06:57:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:59.459 06:57:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:59.459 06:57:13 -- common/autotest_common.sh@10 -- # set +x 00:26:59.459 ************************************ 00:26:59.459 START TEST fio_dif_rand_params 00:26:59.459 ************************************ 00:26:59.459 06:57:13 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:26:59.459 06:57:13 -- target/dif.sh@100 -- # local NULL_DIF 00:26:59.459 06:57:13 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:59.459 06:57:13 -- target/dif.sh@103 -- # NULL_DIF=3 00:26:59.459 06:57:13 -- target/dif.sh@103 -- # bs=128k 00:26:59.459 06:57:13 -- target/dif.sh@103 -- # numjobs=3 00:26:59.459 06:57:13 -- target/dif.sh@103 -- # iodepth=3 00:26:59.459 06:57:13 -- target/dif.sh@103 -- # runtime=5 00:26:59.459 06:57:13 -- target/dif.sh@105 -- # create_subsystems 0 00:26:59.459 06:57:13 -- target/dif.sh@28 -- # local sub 00:26:59.459 06:57:13 -- target/dif.sh@30 -- # for sub in "$@" 00:26:59.459 06:57:13 -- target/dif.sh@31 -- # create_subsystem 0 00:26:59.459 06:57:13 -- target/dif.sh@18 -- # local sub_id=0 00:26:59.459 06:57:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:59.459 06:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.459 06:57:13 -- common/autotest_common.sh@10 -- # set +x 00:26:59.459 bdev_null0 00:26:59.459 06:57:13 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.459 06:57:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:59.459 06:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.459 06:57:13 -- common/autotest_common.sh@10 -- # set +x 00:26:59.459 06:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.459 06:57:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:59.459 06:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.459 06:57:13 -- common/autotest_common.sh@10 -- # set +x 00:26:59.459 06:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.459 06:57:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:59.459 06:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.459 06:57:13 -- common/autotest_common.sh@10 -- # set +x 00:26:59.459 [2024-12-14 06:57:13.100457] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.459 06:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.459 06:57:13 -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:59.459 06:57:13 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:59.459 06:57:13 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:59.459 06:57:13 -- nvmf/common.sh@520 -- # config=() 00:26:59.459 06:57:13 -- nvmf/common.sh@520 -- # local subsystem config 00:26:59.459 06:57:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:59.459 06:57:13 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:59.459 06:57:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:59.459 { 00:26:59.459 "params": { 00:26:59.459 "name": "Nvme$subsystem", 00:26:59.459 "trtype": "$TEST_TRANSPORT", 00:26:59.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.459 "adrfam": "ipv4", 00:26:59.459 "trsvcid": "$NVMF_PORT", 00:26:59.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.459 "hdgst": ${hdgst:-false}, 00:26:59.459 "ddgst": ${ddgst:-false} 00:26:59.459 }, 00:26:59.459 "method": "bdev_nvme_attach_controller" 00:26:59.459 } 00:26:59.459 EOF 00:26:59.459 )") 00:26:59.459 06:57:13 -- target/dif.sh@82 -- # gen_fio_conf 00:26:59.459 06:57:13 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:59.459 06:57:13 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:59.459 06:57:13 -- target/dif.sh@54 -- # local file 00:26:59.459 06:57:13 -- target/dif.sh@56 -- # cat 00:26:59.459 06:57:13 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:59.459 06:57:13 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:59.459 06:57:13 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:59.459 06:57:13 -- common/autotest_common.sh@1330 -- # shift 00:26:59.459 06:57:13 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:59.459 06:57:13 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:59.459 06:57:13 -- nvmf/common.sh@542 -- # cat 00:26:59.459 06:57:13 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:59.459 06:57:13 
-- target/dif.sh@72 -- # (( file = 1 )) 00:26:59.459 06:57:13 -- target/dif.sh@72 -- # (( file <= files )) 00:26:59.459 06:57:13 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:59.460 06:57:13 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:59.460 06:57:13 -- nvmf/common.sh@544 -- # jq . 00:26:59.460 06:57:13 -- nvmf/common.sh@545 -- # IFS=, 00:26:59.460 06:57:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:59.460 "params": { 00:26:59.460 "name": "Nvme0", 00:26:59.460 "trtype": "tcp", 00:26:59.460 "traddr": "10.0.0.2", 00:26:59.460 "adrfam": "ipv4", 00:26:59.460 "trsvcid": "4420", 00:26:59.460 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:59.460 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:59.460 "hdgst": false, 00:26:59.460 "ddgst": false 00:26:59.460 }, 00:26:59.460 "method": "bdev_nvme_attach_controller" 00:26:59.460 }' 00:26:59.460 06:57:13 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:59.460 06:57:13 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:59.460 06:57:13 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:59.460 06:57:13 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:59.460 06:57:13 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:59.460 06:57:13 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:59.460 06:57:13 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:59.460 06:57:13 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:59.460 06:57:13 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:59.460 06:57:13 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:59.460 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:59.460 ... 00:26:59.460 fio-3.35 00:26:59.460 Starting 3 threads 00:27:00.026 [2024-12-14 06:57:13.796796] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
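The first fio_dif_rand_params pass (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5) yields the three-thread randread job whose header appears just above. Roughly the same job expressed as fio command-line options instead of the generated job file; the Nvme0n1 filename is an assumption about the bdev name exposed by bdev_nvme_attach_controller name=Nvme0, and thread/time_based are typical requirements of the spdk_bdev engine rather than values read from this trace:

    # Approximate CLI equivalent of the generated 128k/randread job (sketch only).
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev \
        --spdk_json_conf=/tmp/bdev.json --thread=1 --filename=Nvme0n1 \
        --rw=randread --bs=128k --numjobs=3 --iodepth=3 \
        --time_based=1 --runtime=5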
00:27:00.026 [2024-12-14 06:57:13.796905] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:27:05.297 00:27:05.297 filename0: (groupid=0, jobs=1): err= 0: pid=92080: Sat Dec 14 06:57:18 2024 00:27:05.297 read: IOPS=215, BW=27.0MiB/s (28.3MB/s)(136MiB/5025msec) 00:27:05.297 slat (nsec): min=6140, max=82949, avg=14696.46, stdev=8195.00 00:27:05.297 clat (usec): min=4429, max=58374, avg=13869.81, stdev=11820.26 00:27:05.297 lat (usec): min=4442, max=58411, avg=13884.50, stdev=11821.09 00:27:05.297 clat percentiles (usec): 00:27:05.297 | 1.00th=[ 5997], 5.00th=[ 6718], 10.00th=[ 7242], 20.00th=[ 7701], 00:27:05.297 | 30.00th=[ 8356], 40.00th=[10421], 50.00th=[11338], 60.00th=[11994], 00:27:05.297 | 70.00th=[12518], 80.00th=[13042], 90.00th=[13960], 95.00th=[51643], 00:27:05.297 | 99.00th=[53740], 99.50th=[54264], 99.90th=[57410], 99.95th=[58459], 00:27:05.297 | 99.99th=[58459] 00:27:05.297 bw ( KiB/s): min=19968, max=34048, per=29.97%, avg=27847.11, stdev=5419.67, samples=9 00:27:05.297 iops : min= 156, max= 266, avg=217.56, stdev=42.34, samples=9 00:27:05.297 lat (msec) : 10=38.06%, 20=53.36%, 50=2.21%, 100=6.36% 00:27:05.297 cpu : usr=93.67%, sys=4.66%, ctx=6, majf=0, minf=0 00:27:05.297 IO depths : 1=5.0%, 2=95.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:05.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.297 issued rwts: total=1085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.297 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:05.297 filename0: (groupid=0, jobs=1): err= 0: pid=92081: Sat Dec 14 06:57:18 2024 00:27:05.297 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(145MiB/5040msec) 00:27:05.297 slat (nsec): min=6463, max=58800, avg=16104.25, stdev=8298.05 00:27:05.297 clat (usec): min=5368, max=53161, avg=13021.37, stdev=11775.74 00:27:05.298 lat (usec): min=5389, max=53188, avg=13037.47, stdev=11775.67 00:27:05.298 clat percentiles (usec): 00:27:05.298 | 1.00th=[ 5997], 5.00th=[ 6783], 10.00th=[ 7111], 20.00th=[ 7635], 00:27:05.298 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10421], 00:27:05.298 | 70.00th=[10683], 80.00th=[11076], 90.00th=[11994], 95.00th=[50070], 00:27:05.298 | 99.00th=[52167], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:27:05.298 | 99.99th=[53216] 00:27:05.298 bw ( KiB/s): min=20736, max=42496, per=31.84%, avg=29586.20, stdev=6576.79, samples=10 00:27:05.298 iops : min= 162, max= 332, avg=231.10, stdev=51.33, samples=10 00:27:05.298 lat (msec) : 10=50.56%, 20=40.38%, 50=4.06%, 100=5.00% 00:27:05.298 cpu : usr=94.66%, sys=4.09%, ctx=6, majf=0, minf=0 00:27:05.298 IO depths : 1=4.6%, 2=95.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:05.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.298 issued rwts: total=1159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.298 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:05.298 filename0: (groupid=0, jobs=1): err= 0: pid=92082: Sat Dec 14 06:57:18 2024 00:27:05.298 read: IOPS=282, BW=35.4MiB/s (37.1MB/s)(177MiB/5003msec) 00:27:05.298 slat (usec): min=5, max=149, avg=14.35, stdev= 8.70 00:27:05.298 clat (usec): min=3036, max=47492, avg=10577.89, stdev=3918.33 00:27:05.298 lat (usec): min=3062, max=47501, avg=10592.24, stdev=3919.24 00:27:05.298 clat percentiles (usec): 00:27:05.298 
| 1.00th=[ 4047], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 7963], 00:27:05.298 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[11863], 00:27:05.298 | 70.00th=[13435], 80.00th=[14353], 90.00th=[15008], 95.00th=[15401], 00:27:05.298 | 99.00th=[16450], 99.50th=[17171], 99.90th=[45876], 99.95th=[47449], 00:27:05.298 | 99.99th=[47449] 00:27:05.298 bw ( KiB/s): min=30720, max=41472, per=39.21%, avg=36437.33, stdev=3667.60, samples=9 00:27:05.298 iops : min= 240, max= 324, avg=284.67, stdev=28.65, samples=9 00:27:05.298 lat (msec) : 4=0.71%, 10=50.04%, 20=49.05%, 50=0.21% 00:27:05.298 cpu : usr=92.08%, sys=5.44%, ctx=96, majf=0, minf=0 00:27:05.298 IO depths : 1=19.2%, 2=80.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:05.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.298 issued rwts: total=1415,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.298 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:05.298 00:27:05.298 Run status group 0 (all jobs): 00:27:05.298 READ: bw=90.7MiB/s (95.2MB/s), 27.0MiB/s-35.4MiB/s (28.3MB/s-37.1MB/s), io=457MiB (480MB), run=5003-5040msec 00:27:05.298 06:57:19 -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:05.298 06:57:19 -- target/dif.sh@43 -- # local sub 00:27:05.298 06:57:19 -- target/dif.sh@45 -- # for sub in "$@" 00:27:05.298 06:57:19 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:05.298 06:57:19 -- target/dif.sh@36 -- # local sub_id=0 00:27:05.298 06:57:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:05.298 06:57:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.298 06:57:19 -- common/autotest_common.sh@10 -- # set +x 00:27:05.298 06:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.298 06:57:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:05.298 06:57:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.298 06:57:19 -- common/autotest_common.sh@10 -- # set +x 00:27:05.298 06:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.298 06:57:19 -- target/dif.sh@109 -- # NULL_DIF=2 00:27:05.298 06:57:19 -- target/dif.sh@109 -- # bs=4k 00:27:05.298 06:57:19 -- target/dif.sh@109 -- # numjobs=8 00:27:05.298 06:57:19 -- target/dif.sh@109 -- # iodepth=16 00:27:05.298 06:57:19 -- target/dif.sh@109 -- # runtime= 00:27:05.298 06:57:19 -- target/dif.sh@109 -- # files=2 00:27:05.298 06:57:19 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:05.298 06:57:19 -- target/dif.sh@28 -- # local sub 00:27:05.298 06:57:19 -- target/dif.sh@30 -- # for sub in "$@" 00:27:05.298 06:57:19 -- target/dif.sh@31 -- # create_subsystem 0 00:27:05.298 06:57:19 -- target/dif.sh@18 -- # local sub_id=0 00:27:05.298 06:57:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:05.298 06:57:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.298 06:57:19 -- common/autotest_common.sh@10 -- # set +x 00:27:05.298 bdev_null0 00:27:05.298 06:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.298 06:57:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:05.298 06:57:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.298 06:57:19 -- common/autotest_common.sh@10 -- # set +x 00:27:05.557 06:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
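Creating the three DIF-type-2 subsystems for this pass is the same per-subsystem recipe repeated three times; the trace shows cnode0 above and continues with cnode1 and cnode2 below. As a loop it comes down to roughly the following (sketch only; rpc.py path assumed as before):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for sub in 0 1 2; do
        # one null bdev with 16-byte metadata and DIF type 2 per subsystem
        $RPC bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
        $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
            --serial-number "53313233-$sub" --allow-any-host
        $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
        $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
            -t tcp -a 10.0.0.2 -s 4420
    done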
00:27:05.557 06:57:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:05.557 06:57:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.557 06:57:19 -- common/autotest_common.sh@10 -- # set +x 00:27:05.557 06:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.557 06:57:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:05.557 06:57:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.557 06:57:19 -- common/autotest_common.sh@10 -- # set +x 00:27:05.557 [2024-12-14 06:57:19.306978] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:05.557 06:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.557 06:57:19 -- target/dif.sh@30 -- # for sub in "$@" 00:27:05.557 06:57:19 -- target/dif.sh@31 -- # create_subsystem 1 00:27:05.557 06:57:19 -- target/dif.sh@18 -- # local sub_id=1 00:27:05.557 06:57:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:05.557 06:57:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.557 06:57:19 -- common/autotest_common.sh@10 -- # set +x 00:27:05.557 bdev_null1 00:27:05.557 06:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.557 06:57:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:05.557 06:57:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.557 06:57:19 -- common/autotest_common.sh@10 -- # set +x 00:27:05.557 06:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.557 06:57:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:05.557 06:57:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.557 06:57:19 -- common/autotest_common.sh@10 -- # set +x 00:27:05.557 06:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.557 06:57:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:05.557 06:57:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.557 06:57:19 -- common/autotest_common.sh@10 -- # set +x 00:27:05.557 06:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.557 06:57:19 -- target/dif.sh@30 -- # for sub in "$@" 00:27:05.557 06:57:19 -- target/dif.sh@31 -- # create_subsystem 2 00:27:05.557 06:57:19 -- target/dif.sh@18 -- # local sub_id=2 00:27:05.557 06:57:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:05.557 06:57:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.557 06:57:19 -- common/autotest_common.sh@10 -- # set +x 00:27:05.557 bdev_null2 00:27:05.557 06:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.557 06:57:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:05.557 06:57:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.557 06:57:19 -- common/autotest_common.sh@10 -- # set +x 00:27:05.557 06:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.557 06:57:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:05.557 06:57:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.557 06:57:19 -- common/autotest_common.sh@10 
-- # set +x 00:27:05.557 06:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.557 06:57:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:05.557 06:57:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.557 06:57:19 -- common/autotest_common.sh@10 -- # set +x 00:27:05.557 06:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.557 06:57:19 -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:05.557 06:57:19 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:05.557 06:57:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:05.557 06:57:19 -- nvmf/common.sh@520 -- # config=() 00:27:05.557 06:57:19 -- nvmf/common.sh@520 -- # local subsystem config 00:27:05.557 06:57:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:05.557 06:57:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:05.557 06:57:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:05.557 { 00:27:05.557 "params": { 00:27:05.557 "name": "Nvme$subsystem", 00:27:05.557 "trtype": "$TEST_TRANSPORT", 00:27:05.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:05.557 "adrfam": "ipv4", 00:27:05.557 "trsvcid": "$NVMF_PORT", 00:27:05.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:05.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:05.557 "hdgst": ${hdgst:-false}, 00:27:05.557 "ddgst": ${ddgst:-false} 00:27:05.557 }, 00:27:05.557 "method": "bdev_nvme_attach_controller" 00:27:05.557 } 00:27:05.557 EOF 00:27:05.557 )") 00:27:05.557 06:57:19 -- target/dif.sh@82 -- # gen_fio_conf 00:27:05.557 06:57:19 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:05.557 06:57:19 -- target/dif.sh@54 -- # local file 00:27:05.557 06:57:19 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:05.557 06:57:19 -- target/dif.sh@56 -- # cat 00:27:05.557 06:57:19 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:05.557 06:57:19 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:05.557 06:57:19 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:05.557 06:57:19 -- common/autotest_common.sh@1330 -- # shift 00:27:05.557 06:57:19 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:05.557 06:57:19 -- nvmf/common.sh@542 -- # cat 00:27:05.557 06:57:19 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:05.557 06:57:19 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:05.557 06:57:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:05.557 06:57:19 -- target/dif.sh@72 -- # (( file <= files )) 00:27:05.557 06:57:19 -- target/dif.sh@73 -- # cat 00:27:05.557 06:57:19 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:05.557 06:57:19 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:05.557 06:57:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:05.557 06:57:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:05.557 { 00:27:05.557 "params": { 00:27:05.557 "name": "Nvme$subsystem", 00:27:05.557 "trtype": "$TEST_TRANSPORT", 00:27:05.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:05.557 "adrfam": "ipv4", 00:27:05.557 "trsvcid": "$NVMF_PORT", 00:27:05.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:05.557 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:27:05.558 "hdgst": ${hdgst:-false}, 00:27:05.558 "ddgst": ${ddgst:-false} 00:27:05.558 }, 00:27:05.558 "method": "bdev_nvme_attach_controller" 00:27:05.558 } 00:27:05.558 EOF 00:27:05.558 )") 00:27:05.558 06:57:19 -- nvmf/common.sh@542 -- # cat 00:27:05.558 06:57:19 -- target/dif.sh@72 -- # (( file++ )) 00:27:05.558 06:57:19 -- target/dif.sh@72 -- # (( file <= files )) 00:27:05.558 06:57:19 -- target/dif.sh@73 -- # cat 00:27:05.558 06:57:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:05.558 06:57:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:05.558 { 00:27:05.558 "params": { 00:27:05.558 "name": "Nvme$subsystem", 00:27:05.558 "trtype": "$TEST_TRANSPORT", 00:27:05.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:05.558 "adrfam": "ipv4", 00:27:05.558 "trsvcid": "$NVMF_PORT", 00:27:05.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:05.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:05.558 "hdgst": ${hdgst:-false}, 00:27:05.558 "ddgst": ${ddgst:-false} 00:27:05.558 }, 00:27:05.558 "method": "bdev_nvme_attach_controller" 00:27:05.558 } 00:27:05.558 EOF 00:27:05.558 )") 00:27:05.558 06:57:19 -- target/dif.sh@72 -- # (( file++ )) 00:27:05.558 06:57:19 -- target/dif.sh@72 -- # (( file <= files )) 00:27:05.558 06:57:19 -- nvmf/common.sh@542 -- # cat 00:27:05.558 06:57:19 -- nvmf/common.sh@544 -- # jq . 00:27:05.558 06:57:19 -- nvmf/common.sh@545 -- # IFS=, 00:27:05.558 06:57:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:05.558 "params": { 00:27:05.558 "name": "Nvme0", 00:27:05.558 "trtype": "tcp", 00:27:05.558 "traddr": "10.0.0.2", 00:27:05.558 "adrfam": "ipv4", 00:27:05.558 "trsvcid": "4420", 00:27:05.558 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:05.558 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:05.558 "hdgst": false, 00:27:05.558 "ddgst": false 00:27:05.558 }, 00:27:05.558 "method": "bdev_nvme_attach_controller" 00:27:05.558 },{ 00:27:05.558 "params": { 00:27:05.558 "name": "Nvme1", 00:27:05.558 "trtype": "tcp", 00:27:05.558 "traddr": "10.0.0.2", 00:27:05.558 "adrfam": "ipv4", 00:27:05.558 "trsvcid": "4420", 00:27:05.558 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:05.558 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:05.558 "hdgst": false, 00:27:05.558 "ddgst": false 00:27:05.558 }, 00:27:05.558 "method": "bdev_nvme_attach_controller" 00:27:05.558 },{ 00:27:05.558 "params": { 00:27:05.558 "name": "Nvme2", 00:27:05.558 "trtype": "tcp", 00:27:05.558 "traddr": "10.0.0.2", 00:27:05.558 "adrfam": "ipv4", 00:27:05.558 "trsvcid": "4420", 00:27:05.558 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:05.558 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:05.558 "hdgst": false, 00:27:05.558 "ddgst": false 00:27:05.558 }, 00:27:05.558 "method": "bdev_nvme_attach_controller" 00:27:05.558 }' 00:27:05.558 06:57:19 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:05.558 06:57:19 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:05.558 06:57:19 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:05.558 06:57:19 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:05.558 06:57:19 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:05.558 06:57:19 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:05.558 06:57:19 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:05.558 06:57:19 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:05.558 06:57:19 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:05.558 06:57:19 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:05.816 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:05.816 ... 00:27:05.816 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:05.816 ... 00:27:05.816 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:05.816 ... 00:27:05.816 fio-3.35 00:27:05.816 Starting 24 threads 00:27:06.383 [2024-12-14 06:57:20.298939] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:27:06.383 [2024-12-14 06:57:20.299012] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:27:18.614 00:27:18.614 filename0: (groupid=0, jobs=1): err= 0: pid=92186: Sat Dec 14 06:57:30 2024 00:27:18.614 read: IOPS=216, BW=865KiB/s (886kB/s)(8696KiB/10052msec) 00:27:18.614 slat (usec): min=6, max=8044, avg=20.56, stdev=243.34 00:27:18.614 clat (msec): min=26, max=157, avg=73.85, stdev=23.58 00:27:18.614 lat (msec): min=26, max=157, avg=73.87, stdev=23.59 00:27:18.614 clat percentiles (msec): 00:27:18.614 | 1.00th=[ 28], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 52], 00:27:18.614 | 30.00th=[ 60], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:27:18.614 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 116], 00:27:18.614 | 99.00th=[ 134], 99.50th=[ 148], 99.90th=[ 159], 99.95th=[ 159], 00:27:18.614 | 99.99th=[ 159] 00:27:18.614 bw ( KiB/s): min= 640, max= 1216, per=4.07%, avg=863.15, stdev=155.96, samples=20 00:27:18.614 iops : min= 160, max= 304, avg=215.75, stdev=38.98, samples=20 00:27:18.614 lat (msec) : 50=18.54%, 100=67.34%, 250=14.12% 00:27:18.614 cpu : usr=32.74%, sys=0.44%, ctx=919, majf=0, minf=9 00:27:18.614 IO depths : 1=1.1%, 2=2.9%, 4=10.4%, 8=73.4%, 16=12.2%, 32=0.0%, >=64=0.0% 00:27:18.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.614 complete : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.614 issued rwts: total=2174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.614 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.614 filename0: (groupid=0, jobs=1): err= 0: pid=92187: Sat Dec 14 06:57:30 2024 00:27:18.614 read: IOPS=232, BW=929KiB/s (951kB/s)(9336KiB/10051msec) 00:27:18.614 slat (usec): min=6, max=9031, avg=29.86, stdev=343.72 00:27:18.614 clat (msec): min=20, max=173, avg=68.64, stdev=24.89 00:27:18.614 lat (msec): min=20, max=173, avg=68.67, stdev=24.89 00:27:18.614 clat percentiles (msec): 00:27:18.614 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 41], 20.00th=[ 48], 00:27:18.614 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 67], 60.00th=[ 72], 00:27:18.614 | 70.00th=[ 80], 80.00th=[ 86], 90.00th=[ 103], 95.00th=[ 113], 00:27:18.614 | 99.00th=[ 153], 99.50th=[ 163], 99.90th=[ 174], 99.95th=[ 174], 00:27:18.614 | 99.99th=[ 174] 00:27:18.614 bw ( KiB/s): min= 688, max= 1408, per=4.38%, avg=929.60, stdev=191.33, samples=20 00:27:18.614 iops : min= 172, max= 352, avg=232.40, stdev=47.83, samples=20 00:27:18.614 lat (msec) : 50=24.46%, 100=65.30%, 250=10.24% 00:27:18.614 cpu : usr=37.34%, sys=0.66%, ctx=1180, majf=0, minf=9 00:27:18.614 IO depths : 1=0.8%, 2=2.1%, 4=8.4%, 8=75.4%, 16=13.4%, 32=0.0%, >=64=0.0% 00:27:18.614 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.614 complete : 0=0.0%, 4=89.8%, 8=6.2%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.614 issued rwts: total=2334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.614 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.614 filename0: (groupid=0, jobs=1): err= 0: pid=92188: Sat Dec 14 06:57:30 2024 00:27:18.614 read: IOPS=207, BW=829KiB/s (849kB/s)(8340KiB/10058msec) 00:27:18.614 slat (usec): min=5, max=8045, avg=29.35, stdev=341.31 00:27:18.614 clat (msec): min=20, max=143, avg=76.98, stdev=22.96 00:27:18.614 lat (msec): min=20, max=143, avg=77.01, stdev=22.96 00:27:18.614 clat percentiles (msec): 00:27:18.614 | 1.00th=[ 23], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 59], 00:27:18.614 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 78], 60.00th=[ 82], 00:27:18.614 | 70.00th=[ 91], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 114], 00:27:18.614 | 99.00th=[ 132], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:27:18.614 | 99.99th=[ 144] 00:27:18.614 bw ( KiB/s): min= 592, max= 1432, per=3.90%, avg=828.30, stdev=163.71, samples=20 00:27:18.614 iops : min= 148, max= 358, avg=207.05, stdev=40.92, samples=20 00:27:18.614 lat (msec) : 50=14.24%, 100=70.36%, 250=15.40% 00:27:18.614 cpu : usr=37.46%, sys=0.43%, ctx=1106, majf=0, minf=9 00:27:18.614 IO depths : 1=2.0%, 2=4.4%, 4=13.1%, 8=69.3%, 16=11.2%, 32=0.0%, >=64=0.0% 00:27:18.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.614 complete : 0=0.0%, 4=90.7%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.614 issued rwts: total=2085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.614 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.614 filename0: (groupid=0, jobs=1): err= 0: pid=92189: Sat Dec 14 06:57:30 2024 00:27:18.614 read: IOPS=237, BW=949KiB/s (972kB/s)(9548KiB/10056msec) 00:27:18.614 slat (usec): min=4, max=8001, avg=16.93, stdev=163.77 00:27:18.614 clat (msec): min=8, max=153, avg=67.17, stdev=23.70 00:27:18.614 lat (msec): min=8, max=153, avg=67.19, stdev=23.70 00:27:18.614 clat percentiles (msec): 00:27:18.614 | 1.00th=[ 18], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 48], 00:27:18.614 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 65], 60.00th=[ 71], 00:27:18.614 | 70.00th=[ 77], 80.00th=[ 86], 90.00th=[ 104], 95.00th=[ 112], 00:27:18.614 | 99.00th=[ 130], 99.50th=[ 138], 99.90th=[ 155], 99.95th=[ 155], 00:27:18.614 | 99.99th=[ 155] 00:27:18.614 bw ( KiB/s): min= 640, max= 1376, per=4.47%, avg=948.40, stdev=171.20, samples=20 00:27:18.614 iops : min= 160, max= 344, avg=237.10, stdev=42.80, samples=20 00:27:18.614 lat (msec) : 10=0.67%, 20=0.67%, 50=26.69%, 100=60.75%, 250=11.23% 00:27:18.614 cpu : usr=36.55%, sys=0.55%, ctx=1020, majf=0, minf=9 00:27:18.614 IO depths : 1=0.8%, 2=1.9%, 4=9.2%, 8=75.7%, 16=12.5%, 32=0.0%, >=64=0.0% 00:27:18.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.614 complete : 0=0.0%, 4=89.6%, 8=5.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.614 issued rwts: total=2387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.614 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.614 filename0: (groupid=0, jobs=1): err= 0: pid=92190: Sat Dec 14 06:57:30 2024 00:27:18.614 read: IOPS=271, BW=1086KiB/s (1112kB/s)(10.7MiB/10047msec) 00:27:18.614 slat (usec): min=3, max=4082, avg=20.37, stdev=168.91 00:27:18.614 clat (usec): min=1444, max=148535, avg=58731.44, stdev=26858.29 00:27:18.614 lat (usec): min=1451, max=148542, avg=58751.81, stdev=26860.24 
00:27:18.614 clat percentiles (usec): 00:27:18.614 | 1.00th=[ 1549], 5.00th=[ 2180], 10.00th=[ 22414], 20.00th=[ 41681], 00:27:18.614 | 30.00th=[ 46924], 40.00th=[ 53740], 50.00th=[ 58983], 60.00th=[ 64226], 00:27:18.614 | 70.00th=[ 71828], 80.00th=[ 79168], 90.00th=[ 88605], 95.00th=[101188], 00:27:18.614 | 99.00th=[133694], 99.50th=[137364], 99.90th=[147850], 99.95th=[147850], 00:27:18.614 | 99.99th=[147850] 00:27:18.614 bw ( KiB/s): min= 768, max= 2846, per=5.12%, avg=1085.80, stdev=464.58, samples=20 00:27:18.614 iops : min= 192, max= 711, avg=271.40, stdev=116.06, samples=20 00:27:18.614 lat (msec) : 2=4.69%, 4=1.17%, 10=2.35%, 20=0.66%, 50=26.94% 00:27:18.614 lat (msec) : 100=58.61%, 250=5.57% 00:27:18.614 cpu : usr=43.72%, sys=0.83%, ctx=1651, majf=0, minf=0 00:27:18.614 IO depths : 1=2.0%, 2=4.4%, 4=13.2%, 8=69.2%, 16=11.2%, 32=0.0%, >=64=0.0% 00:27:18.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.614 complete : 0=0.0%, 4=90.7%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.614 issued rwts: total=2728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.614 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.614 filename0: (groupid=0, jobs=1): err= 0: pid=92191: Sat Dec 14 06:57:30 2024 00:27:18.614 read: IOPS=230, BW=921KiB/s (943kB/s)(9256KiB/10055msec) 00:27:18.614 slat (usec): min=6, max=8037, avg=24.22, stdev=288.82 00:27:18.614 clat (msec): min=27, max=155, avg=69.12, stdev=22.54 00:27:18.614 lat (msec): min=27, max=155, avg=69.14, stdev=22.55 00:27:18.614 clat percentiles (msec): 00:27:18.614 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 48], 00:27:18.614 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:27:18.614 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 103], 95.00th=[ 110], 00:27:18.614 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 155], 99.95th=[ 155], 00:27:18.614 | 99.99th=[ 155] 00:27:18.614 bw ( KiB/s): min= 712, max= 1280, per=4.35%, avg=923.45, stdev=160.68, samples=20 00:27:18.614 iops : min= 178, max= 320, avg=230.85, stdev=40.15, samples=20 00:27:18.614 lat (msec) : 50=24.55%, 100=64.82%, 250=10.63% 00:27:18.615 cpu : usr=34.40%, sys=0.62%, ctx=916, majf=0, minf=9 00:27:18.615 IO depths : 1=0.5%, 2=1.4%, 4=8.6%, 8=75.9%, 16=13.5%, 32=0.0%, >=64=0.0% 00:27:18.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.615 complete : 0=0.0%, 4=89.7%, 8=6.2%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.615 issued rwts: total=2314,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.615 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.615 filename0: (groupid=0, jobs=1): err= 0: pid=92192: Sat Dec 14 06:57:30 2024 00:27:18.615 read: IOPS=213, BW=854KiB/s (875kB/s)(8584KiB/10048msec) 00:27:18.615 slat (usec): min=5, max=8188, avg=17.81, stdev=176.67 00:27:18.615 clat (msec): min=19, max=160, avg=74.80, stdev=22.28 00:27:18.615 lat (msec): min=19, max=160, avg=74.82, stdev=22.28 00:27:18.615 clat percentiles (msec): 00:27:18.615 | 1.00th=[ 24], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 55], 00:27:18.615 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 80], 00:27:18.615 | 70.00th=[ 85], 80.00th=[ 92], 90.00th=[ 105], 95.00th=[ 112], 00:27:18.615 | 99.00th=[ 136], 99.50th=[ 146], 99.90th=[ 161], 99.95th=[ 161], 00:27:18.615 | 99.99th=[ 161] 00:27:18.615 bw ( KiB/s): min= 720, max= 1328, per=4.02%, avg=852.00, stdev=134.43, samples=20 00:27:18.615 iops : min= 180, max= 332, avg=213.00, stdev=33.61, samples=20 00:27:18.615 lat (msec) : 20=0.33%, 
50=14.82%, 100=70.74%, 250=14.12% 00:27:18.615 cpu : usr=40.57%, sys=0.64%, ctx=1351, majf=0, minf=9 00:27:18.615 IO depths : 1=1.7%, 2=3.8%, 4=12.1%, 8=70.5%, 16=11.9%, 32=0.0%, >=64=0.0% 00:27:18.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.615 complete : 0=0.0%, 4=90.7%, 8=4.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.615 issued rwts: total=2146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.615 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.615 filename0: (groupid=0, jobs=1): err= 0: pid=92193: Sat Dec 14 06:57:30 2024 00:27:18.615 read: IOPS=222, BW=890KiB/s (912kB/s)(8928KiB/10026msec) 00:27:18.615 slat (usec): min=5, max=8044, avg=20.10, stdev=240.46 00:27:18.615 clat (msec): min=20, max=176, avg=71.76, stdev=25.70 00:27:18.615 lat (msec): min=20, max=176, avg=71.78, stdev=25.70 00:27:18.615 clat percentiles (msec): 00:27:18.615 | 1.00th=[ 23], 5.00th=[ 34], 10.00th=[ 42], 20.00th=[ 48], 00:27:18.615 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 75], 00:27:18.615 | 70.00th=[ 83], 80.00th=[ 92], 90.00th=[ 108], 95.00th=[ 117], 00:27:18.615 | 99.00th=[ 144], 99.50th=[ 163], 99.90th=[ 178], 99.95th=[ 178], 00:27:18.615 | 99.99th=[ 178] 00:27:18.615 bw ( KiB/s): min= 512, max= 1616, per=4.18%, avg=886.45, stdev=215.72, samples=20 00:27:18.615 iops : min= 128, max= 404, avg=221.60, stdev=53.94, samples=20 00:27:18.615 lat (msec) : 50=22.58%, 100=63.80%, 250=13.62% 00:27:18.615 cpu : usr=32.55%, sys=0.61%, ctx=904, majf=0, minf=9 00:27:18.615 IO depths : 1=1.1%, 2=2.3%, 4=9.3%, 8=74.6%, 16=12.6%, 32=0.0%, >=64=0.0% 00:27:18.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.615 complete : 0=0.0%, 4=89.8%, 8=5.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.615 issued rwts: total=2232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.615 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.615 filename1: (groupid=0, jobs=1): err= 0: pid=92194: Sat Dec 14 06:57:30 2024 00:27:18.615 read: IOPS=249, BW=1000KiB/s (1023kB/s)(9.82MiB/10061msec) 00:27:18.615 slat (usec): min=5, max=8012, avg=17.44, stdev=173.70 00:27:18.615 clat (msec): min=12, max=165, avg=63.85, stdev=21.85 00:27:18.615 lat (msec): min=12, max=165, avg=63.87, stdev=21.84 00:27:18.615 clat percentiles (msec): 00:27:18.615 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 39], 20.00th=[ 47], 00:27:18.615 | 30.00th=[ 53], 40.00th=[ 56], 50.00th=[ 63], 60.00th=[ 69], 00:27:18.615 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 93], 95.00th=[ 104], 00:27:18.615 | 99.00th=[ 134], 99.50th=[ 136], 99.90th=[ 167], 99.95th=[ 167], 00:27:18.615 | 99.99th=[ 167] 00:27:18.615 bw ( KiB/s): min= 720, max= 1616, per=4.71%, avg=998.85, stdev=208.94, samples=20 00:27:18.615 iops : min= 180, max= 404, avg=249.65, stdev=52.28, samples=20 00:27:18.615 lat (msec) : 20=0.28%, 50=27.45%, 100=66.03%, 250=6.25% 00:27:18.615 cpu : usr=40.68%, sys=0.61%, ctx=1127, majf=0, minf=9 00:27:18.615 IO depths : 1=0.4%, 2=1.0%, 4=7.1%, 8=78.1%, 16=13.4%, 32=0.0%, >=64=0.0% 00:27:18.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.615 complete : 0=0.0%, 4=89.4%, 8=6.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.615 issued rwts: total=2514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.615 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.615 filename1: (groupid=0, jobs=1): err= 0: pid=92195: Sat Dec 14 06:57:30 2024 00:27:18.615 read: IOPS=202, BW=812KiB/s (831kB/s)(8128KiB/10016msec) 00:27:18.615 slat 
(usec): min=6, max=8041, avg=26.27, stdev=281.38 00:27:18.615 clat (msec): min=20, max=166, avg=78.69, stdev=24.36 00:27:18.615 lat (msec): min=20, max=166, avg=78.72, stdev=24.36 00:27:18.615 clat percentiles (msec): 00:27:18.615 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 47], 20.00th=[ 61], 00:27:18.615 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 84], 00:27:18.615 | 70.00th=[ 90], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 118], 00:27:18.615 | 99.00th=[ 142], 99.50th=[ 167], 99.90th=[ 167], 99.95th=[ 167], 00:27:18.615 | 99.99th=[ 167] 00:27:18.615 bw ( KiB/s): min= 640, max= 1520, per=3.82%, avg=809.79, stdev=193.96, samples=19 00:27:18.615 iops : min= 160, max= 380, avg=202.42, stdev=48.50, samples=19 00:27:18.615 lat (msec) : 50=12.99%, 100=68.16%, 250=18.85% 00:27:18.615 cpu : usr=36.42%, sys=0.49%, ctx=1054, majf=0, minf=9 00:27:18.615 IO depths : 1=1.8%, 2=4.3%, 4=13.5%, 8=68.8%, 16=11.6%, 32=0.0%, >=64=0.0% 00:27:18.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.615 complete : 0=0.0%, 4=91.1%, 8=4.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.615 issued rwts: total=2032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.615 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.615 filename1: (groupid=0, jobs=1): err= 0: pid=92196: Sat Dec 14 06:57:30 2024 00:27:18.615 read: IOPS=198, BW=796KiB/s (815kB/s)(7984KiB/10033msec) 00:27:18.615 slat (usec): min=4, max=4032, avg=17.71, stdev=113.16 00:27:18.615 clat (msec): min=22, max=167, avg=80.21, stdev=25.69 00:27:18.615 lat (msec): min=22, max=167, avg=80.23, stdev=25.69 00:27:18.615 clat percentiles (msec): 00:27:18.615 | 1.00th=[ 32], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 63], 00:27:18.615 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 83], 00:27:18.615 | 70.00th=[ 92], 80.00th=[ 101], 90.00th=[ 113], 95.00th=[ 131], 00:27:18.615 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 167], 99.95th=[ 167], 00:27:18.615 | 99.99th=[ 167] 00:27:18.615 bw ( KiB/s): min= 507, max= 1280, per=3.75%, avg=795.58, stdev=164.67, samples=19 00:27:18.615 iops : min= 126, max= 320, avg=198.84, stdev=41.24, samples=19 00:27:18.615 lat (msec) : 50=11.97%, 100=67.54%, 250=20.49% 00:27:18.615 cpu : usr=42.75%, sys=0.67%, ctx=1352, majf=0, minf=9 00:27:18.615 IO depths : 1=2.7%, 2=6.3%, 4=16.6%, 8=64.0%, 16=10.4%, 32=0.0%, >=64=0.0% 00:27:18.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.615 complete : 0=0.0%, 4=91.9%, 8=2.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.615 issued rwts: total=1996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.615 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.615 filename1: (groupid=0, jobs=1): err= 0: pid=92197: Sat Dec 14 06:57:30 2024 00:27:18.615 read: IOPS=200, BW=800KiB/s (819kB/s)(8024KiB/10029msec) 00:27:18.615 slat (usec): min=4, max=4058, avg=15.73, stdev=90.77 00:27:18.615 clat (msec): min=25, max=171, avg=79.85, stdev=24.89 00:27:18.615 lat (msec): min=25, max=171, avg=79.86, stdev=24.89 00:27:18.615 clat percentiles (msec): 00:27:18.615 | 1.00th=[ 31], 5.00th=[ 40], 10.00th=[ 51], 20.00th=[ 61], 00:27:18.615 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 84], 00:27:18.615 | 70.00th=[ 94], 80.00th=[ 102], 90.00th=[ 112], 95.00th=[ 124], 00:27:18.615 | 99.00th=[ 144], 99.50th=[ 171], 99.90th=[ 171], 99.95th=[ 171], 00:27:18.615 | 99.99th=[ 171] 00:27:18.615 bw ( KiB/s): min= 592, max= 1280, per=3.76%, avg=798.37, stdev=141.01, samples=19 00:27:18.615 iops : min= 148, max= 320, 
avg=199.58, stdev=35.24, samples=19 00:27:18.615 lat (msec) : 50=9.47%, 100=69.04%, 250=21.49% 00:27:18.615 cpu : usr=41.91%, sys=0.47%, ctx=1146, majf=0, minf=9 00:27:18.615 IO depths : 1=2.8%, 2=6.1%, 4=16.1%, 8=65.0%, 16=10.0%, 32=0.0%, >=64=0.0% 00:27:18.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.615 complete : 0=0.0%, 4=91.7%, 8=3.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.615 issued rwts: total=2006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.615 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.615 filename1: (groupid=0, jobs=1): err= 0: pid=92198: Sat Dec 14 06:57:30 2024 00:27:18.615 read: IOPS=196, BW=787KiB/s (806kB/s)(7904KiB/10038msec) 00:27:18.615 slat (usec): min=6, max=8004, avg=19.07, stdev=201.45 00:27:18.615 clat (msec): min=22, max=158, avg=81.14, stdev=24.01 00:27:18.615 lat (msec): min=22, max=158, avg=81.16, stdev=24.01 00:27:18.615 clat percentiles (msec): 00:27:18.615 | 1.00th=[ 25], 5.00th=[ 39], 10.00th=[ 49], 20.00th=[ 62], 00:27:18.615 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 81], 60.00th=[ 88], 00:27:18.615 | 70.00th=[ 95], 80.00th=[ 101], 90.00th=[ 111], 95.00th=[ 116], 00:27:18.615 | 99.00th=[ 148], 99.50th=[ 153], 99.90th=[ 159], 99.95th=[ 159], 00:27:18.615 | 99.99th=[ 159] 00:27:18.615 bw ( KiB/s): min= 568, max= 1256, per=3.70%, avg=784.00, stdev=142.16, samples=20 00:27:18.615 iops : min= 142, max= 314, avg=196.00, stdev=35.54, samples=20 00:27:18.615 lat (msec) : 50=10.53%, 100=70.29%, 250=19.18% 00:27:18.615 cpu : usr=36.86%, sys=0.46%, ctx=1018, majf=0, minf=9 00:27:18.615 IO depths : 1=1.6%, 2=3.5%, 4=11.3%, 8=71.0%, 16=12.6%, 32=0.0%, >=64=0.0% 00:27:18.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.615 complete : 0=0.0%, 4=90.6%, 8=5.4%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.615 issued rwts: total=1976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.615 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.615 filename1: (groupid=0, jobs=1): err= 0: pid=92199: Sat Dec 14 06:57:30 2024 00:27:18.615 read: IOPS=215, BW=861KiB/s (882kB/s)(8652KiB/10043msec) 00:27:18.615 slat (usec): min=4, max=8047, avg=21.03, stdev=244.20 00:27:18.615 clat (msec): min=18, max=173, avg=74.10, stdev=27.37 00:27:18.615 lat (msec): min=18, max=173, avg=74.12, stdev=27.36 00:27:18.616 clat percentiles (msec): 00:27:18.616 | 1.00th=[ 23], 5.00th=[ 25], 10.00th=[ 45], 20.00th=[ 50], 00:27:18.616 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 79], 00:27:18.616 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 121], 00:27:18.616 | 99.00th=[ 153], 99.50th=[ 163], 99.90th=[ 174], 99.95th=[ 174], 00:27:18.616 | 99.99th=[ 174] 00:27:18.616 bw ( KiB/s): min= 640, max= 1760, per=4.06%, avg=860.60, stdev=237.95, samples=20 00:27:18.616 iops : min= 160, max= 440, avg=215.15, stdev=59.49, samples=20 00:27:18.616 lat (msec) : 20=0.28%, 50=20.20%, 100=64.12%, 250=15.40% 00:27:18.616 cpu : usr=34.13%, sys=0.53%, ctx=925, majf=0, minf=9 00:27:18.616 IO depths : 1=1.0%, 2=2.4%, 4=10.3%, 8=73.5%, 16=12.9%, 32=0.0%, >=64=0.0% 00:27:18.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.616 complete : 0=0.0%, 4=90.0%, 8=5.7%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.616 issued rwts: total=2163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.616 filename1: (groupid=0, jobs=1): err= 0: pid=92200: Sat Dec 14 06:57:30 2024 00:27:18.616 read: 
IOPS=188, BW=756KiB/s (774kB/s)(7580KiB/10031msec) 00:27:18.616 slat (usec): min=4, max=8044, avg=28.19, stdev=323.85 00:27:18.616 clat (msec): min=22, max=154, avg=84.43, stdev=26.39 00:27:18.616 lat (msec): min=22, max=154, avg=84.46, stdev=26.39 00:27:18.616 clat percentiles (msec): 00:27:18.616 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 49], 20.00th=[ 64], 00:27:18.616 | 30.00th=[ 71], 40.00th=[ 79], 50.00th=[ 84], 60.00th=[ 93], 00:27:18.616 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 118], 95.00th=[ 133], 00:27:18.616 | 99.00th=[ 142], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 155], 00:27:18.616 | 99.99th=[ 155] 00:27:18.616 bw ( KiB/s): min= 640, max= 1432, per=3.57%, avg=758.21, stdev=173.30, samples=19 00:27:18.616 iops : min= 160, max= 358, avg=189.53, stdev=43.32, samples=19 00:27:18.616 lat (msec) : 50=10.08%, 100=63.17%, 250=26.75% 00:27:18.616 cpu : usr=32.75%, sys=0.43%, ctx=908, majf=0, minf=9 00:27:18.616 IO depths : 1=2.0%, 2=4.7%, 4=14.5%, 8=67.5%, 16=11.2%, 32=0.0%, >=64=0.0% 00:27:18.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.616 complete : 0=0.0%, 4=91.1%, 8=3.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.616 issued rwts: total=1895,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.616 filename1: (groupid=0, jobs=1): err= 0: pid=92201: Sat Dec 14 06:57:30 2024 00:27:18.616 read: IOPS=249, BW=998KiB/s (1022kB/s)(9.79MiB/10049msec) 00:27:18.616 slat (usec): min=4, max=5978, avg=21.74, stdev=195.44 00:27:18.616 clat (msec): min=20, max=130, avg=63.89, stdev=20.55 00:27:18.616 lat (msec): min=20, max=130, avg=63.91, stdev=20.55 00:27:18.616 clat percentiles (msec): 00:27:18.616 | 1.00th=[ 22], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 48], 00:27:18.616 | 30.00th=[ 52], 40.00th=[ 55], 50.00th=[ 60], 60.00th=[ 65], 00:27:18.616 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 95], 95.00th=[ 106], 00:27:18.616 | 99.00th=[ 113], 99.50th=[ 121], 99.90th=[ 131], 99.95th=[ 131], 00:27:18.616 | 99.99th=[ 131] 00:27:18.616 bw ( KiB/s): min= 638, max= 1328, per=4.71%, avg=998.70, stdev=172.20, samples=20 00:27:18.616 iops : min= 159, max= 332, avg=249.65, stdev=43.10, samples=20 00:27:18.616 lat (msec) : 50=28.68%, 100=62.19%, 250=9.13% 00:27:18.616 cpu : usr=46.04%, sys=0.66%, ctx=1163, majf=0, minf=9 00:27:18.616 IO depths : 1=1.4%, 2=3.4%, 4=11.4%, 8=72.0%, 16=11.9%, 32=0.0%, >=64=0.0% 00:27:18.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.616 complete : 0=0.0%, 4=90.4%, 8=4.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.616 issued rwts: total=2507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.616 filename2: (groupid=0, jobs=1): err= 0: pid=92202: Sat Dec 14 06:57:30 2024 00:27:18.616 read: IOPS=212, BW=849KiB/s (870kB/s)(8536KiB/10049msec) 00:27:18.616 slat (nsec): min=6584, max=93383, avg=13172.63, stdev=7406.81 00:27:18.616 clat (msec): min=23, max=163, avg=75.05, stdev=23.34 00:27:18.616 lat (msec): min=23, max=163, avg=75.06, stdev=23.34 00:27:18.616 clat percentiles (msec): 00:27:18.616 | 1.00th=[ 34], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 51], 00:27:18.616 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 82], 00:27:18.616 | 70.00th=[ 86], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 115], 00:27:18.616 | 99.00th=[ 142], 99.50th=[ 142], 99.90th=[ 165], 99.95th=[ 165], 00:27:18.616 | 99.99th=[ 165] 00:27:18.616 bw ( KiB/s): min= 600, max= 1280, per=4.01%, 
avg=851.20, stdev=139.54, samples=20 00:27:18.616 iops : min= 150, max= 320, avg=212.80, stdev=34.89, samples=20 00:27:18.616 lat (msec) : 50=18.88%, 100=67.62%, 250=13.50% 00:27:18.616 cpu : usr=34.11%, sys=0.57%, ctx=902, majf=0, minf=9 00:27:18.616 IO depths : 1=1.0%, 2=2.4%, 4=9.4%, 8=74.1%, 16=13.1%, 32=0.0%, >=64=0.0% 00:27:18.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.616 complete : 0=0.0%, 4=90.1%, 8=5.8%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.616 issued rwts: total=2134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.616 filename2: (groupid=0, jobs=1): err= 0: pid=92203: Sat Dec 14 06:57:30 2024 00:27:18.616 read: IOPS=244, BW=978KiB/s (1001kB/s)(9856KiB/10079msec) 00:27:18.616 slat (usec): min=4, max=8055, avg=21.87, stdev=242.69 00:27:18.616 clat (msec): min=7, max=130, avg=65.19, stdev=22.70 00:27:18.616 lat (msec): min=7, max=130, avg=65.21, stdev=22.70 00:27:18.616 clat percentiles (msec): 00:27:18.616 | 1.00th=[ 9], 5.00th=[ 32], 10.00th=[ 39], 20.00th=[ 47], 00:27:18.616 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 71], 00:27:18.616 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 106], 00:27:18.616 | 99.00th=[ 124], 99.50th=[ 126], 99.90th=[ 131], 99.95th=[ 131], 00:27:18.616 | 99.99th=[ 131] 00:27:18.616 bw ( KiB/s): min= 696, max= 1480, per=4.62%, avg=979.20, stdev=204.25, samples=20 00:27:18.616 iops : min= 174, max= 370, avg=244.80, stdev=51.06, samples=20 00:27:18.616 lat (msec) : 10=1.62%, 20=0.32%, 50=25.08%, 100=64.89%, 250=8.08% 00:27:18.616 cpu : usr=40.31%, sys=0.69%, ctx=1116, majf=0, minf=9 00:27:18.616 IO depths : 1=1.5%, 2=3.3%, 4=10.8%, 8=72.3%, 16=12.1%, 32=0.0%, >=64=0.0% 00:27:18.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.616 complete : 0=0.0%, 4=90.2%, 8=5.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.616 issued rwts: total=2464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.616 filename2: (groupid=0, jobs=1): err= 0: pid=92204: Sat Dec 14 06:57:30 2024 00:27:18.616 read: IOPS=226, BW=905KiB/s (926kB/s)(9100KiB/10059msec) 00:27:18.616 slat (usec): min=6, max=8020, avg=30.97, stdev=325.28 00:27:18.616 clat (msec): min=25, max=147, avg=70.51, stdev=23.33 00:27:18.616 lat (msec): min=25, max=147, avg=70.54, stdev=23.34 00:27:18.616 clat percentiles (msec): 00:27:18.616 | 1.00th=[ 30], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 48], 00:27:18.616 | 30.00th=[ 55], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 73], 00:27:18.616 | 70.00th=[ 81], 80.00th=[ 90], 90.00th=[ 105], 95.00th=[ 114], 00:27:18.616 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 148], 99.95th=[ 148], 00:27:18.616 | 99.99th=[ 148] 00:27:18.616 bw ( KiB/s): min= 752, max= 1232, per=4.26%, avg=903.55, stdev=168.61, samples=20 00:27:18.616 iops : min= 188, max= 308, avg=225.85, stdev=42.12, samples=20 00:27:18.616 lat (msec) : 50=23.91%, 100=62.64%, 250=13.45% 00:27:18.616 cpu : usr=38.33%, sys=0.51%, ctx=1044, majf=0, minf=9 00:27:18.616 IO depths : 1=0.9%, 2=2.3%, 4=10.5%, 8=74.0%, 16=12.3%, 32=0.0%, >=64=0.0% 00:27:18.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.616 complete : 0=0.0%, 4=89.9%, 8=5.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.616 issued rwts: total=2275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.616 filename2: 
(groupid=0, jobs=1): err= 0: pid=92205: Sat Dec 14 06:57:30 2024 00:27:18.616 read: IOPS=205, BW=822KiB/s (842kB/s)(8248KiB/10034msec) 00:27:18.616 slat (usec): min=6, max=8021, avg=27.48, stdev=317.50 00:27:18.616 clat (msec): min=23, max=178, avg=77.59, stdev=23.59 00:27:18.616 lat (msec): min=23, max=178, avg=77.61, stdev=23.61 00:27:18.616 clat percentiles (msec): 00:27:18.616 | 1.00th=[ 29], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 59], 00:27:18.616 | 30.00th=[ 69], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 82], 00:27:18.616 | 70.00th=[ 88], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 116], 00:27:18.616 | 99.00th=[ 142], 99.50th=[ 150], 99.90th=[ 178], 99.95th=[ 178], 00:27:18.616 | 99.99th=[ 178] 00:27:18.616 bw ( KiB/s): min= 592, max= 1352, per=3.84%, avg=815.68, stdev=163.51, samples=19 00:27:18.616 iops : min= 148, max= 338, avg=203.89, stdev=40.91, samples=19 00:27:18.616 lat (msec) : 50=15.13%, 100=68.23%, 250=16.63% 00:27:18.616 cpu : usr=34.71%, sys=0.49%, ctx=967, majf=0, minf=9 00:27:18.616 IO depths : 1=1.6%, 2=3.9%, 4=12.2%, 8=70.7%, 16=11.6%, 32=0.0%, >=64=0.0% 00:27:18.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.616 complete : 0=0.0%, 4=90.8%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.616 issued rwts: total=2062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.616 filename2: (groupid=0, jobs=1): err= 0: pid=92206: Sat Dec 14 06:57:30 2024 00:27:18.616 read: IOPS=247, BW=989KiB/s (1013kB/s)(9956KiB/10062msec) 00:27:18.616 slat (usec): min=5, max=3993, avg=15.54, stdev=82.58 00:27:18.616 clat (msec): min=9, max=151, avg=64.40, stdev=22.64 00:27:18.616 lat (msec): min=9, max=151, avg=64.41, stdev=22.64 00:27:18.616 clat percentiles (msec): 00:27:18.616 | 1.00th=[ 16], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 47], 00:27:18.616 | 30.00th=[ 51], 40.00th=[ 55], 50.00th=[ 63], 60.00th=[ 69], 00:27:18.616 | 70.00th=[ 75], 80.00th=[ 82], 90.00th=[ 99], 95.00th=[ 109], 00:27:18.616 | 99.00th=[ 118], 99.50th=[ 124], 99.90th=[ 153], 99.95th=[ 153], 00:27:18.616 | 99.99th=[ 153] 00:27:18.616 bw ( KiB/s): min= 640, max= 1376, per=4.66%, avg=988.85, stdev=185.51, samples=20 00:27:18.616 iops : min= 160, max= 344, avg=247.15, stdev=46.43, samples=20 00:27:18.616 lat (msec) : 10=0.48%, 20=1.85%, 50=28.65%, 100=59.30%, 250=9.72% 00:27:18.616 cpu : usr=43.65%, sys=0.69%, ctx=1375, majf=0, minf=9 00:27:18.616 IO depths : 1=1.6%, 2=3.4%, 4=10.8%, 8=72.4%, 16=11.8%, 32=0.0%, >=64=0.0% 00:27:18.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.616 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.616 issued rwts: total=2489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.617 filename2: (groupid=0, jobs=1): err= 0: pid=92207: Sat Dec 14 06:57:30 2024 00:27:18.617 read: IOPS=224, BW=900KiB/s (921kB/s)(9040KiB/10047msec) 00:27:18.617 slat (usec): min=4, max=8034, avg=27.08, stdev=336.11 00:27:18.617 clat (msec): min=21, max=130, avg=70.96, stdev=22.52 00:27:18.617 lat (msec): min=21, max=130, avg=70.99, stdev=22.54 00:27:18.617 clat percentiles (msec): 00:27:18.617 | 1.00th=[ 23], 5.00th=[ 35], 10.00th=[ 46], 20.00th=[ 49], 00:27:18.617 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 75], 00:27:18.617 | 70.00th=[ 83], 80.00th=[ 91], 90.00th=[ 105], 95.00th=[ 110], 00:27:18.617 | 99.00th=[ 121], 99.50th=[ 127], 99.90th=[ 129], 99.95th=[ 129], 
00:27:18.617 | 99.99th=[ 131] 00:27:18.617 bw ( KiB/s): min= 696, max= 1536, per=4.23%, avg=897.60, stdev=177.25, samples=20 00:27:18.617 iops : min= 174, max= 384, avg=224.40, stdev=44.31, samples=20 00:27:18.617 lat (msec) : 50=21.77%, 100=67.12%, 250=11.11% 00:27:18.617 cpu : usr=34.32%, sys=0.50%, ctx=910, majf=0, minf=9 00:27:18.617 IO depths : 1=1.2%, 2=3.0%, 4=10.2%, 8=73.2%, 16=12.3%, 32=0.0%, >=64=0.0% 00:27:18.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.617 complete : 0=0.0%, 4=90.4%, 8=5.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.617 issued rwts: total=2260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.617 filename2: (groupid=0, jobs=1): err= 0: pid=92208: Sat Dec 14 06:57:30 2024 00:27:18.617 read: IOPS=210, BW=842KiB/s (863kB/s)(8460KiB/10043msec) 00:27:18.617 slat (usec): min=4, max=10098, avg=33.21, stdev=325.51 00:27:18.617 clat (msec): min=20, max=144, avg=75.72, stdev=21.54 00:27:18.617 lat (msec): min=20, max=144, avg=75.75, stdev=21.54 00:27:18.617 clat percentiles (msec): 00:27:18.617 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 59], 00:27:18.617 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 80], 00:27:18.617 | 70.00th=[ 85], 80.00th=[ 93], 90.00th=[ 105], 95.00th=[ 111], 00:27:18.617 | 99.00th=[ 131], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:27:18.617 | 99.99th=[ 144] 00:27:18.617 bw ( KiB/s): min= 640, max= 1328, per=3.96%, avg=839.45, stdev=150.16, samples=20 00:27:18.617 iops : min= 160, max= 332, avg=209.85, stdev=37.55, samples=20 00:27:18.617 lat (msec) : 50=11.82%, 100=74.66%, 250=13.52% 00:27:18.617 cpu : usr=40.80%, sys=0.64%, ctx=1488, majf=0, minf=9 00:27:18.617 IO depths : 1=2.2%, 2=4.9%, 4=13.5%, 8=68.4%, 16=11.1%, 32=0.0%, >=64=0.0% 00:27:18.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.617 complete : 0=0.0%, 4=91.1%, 8=4.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.617 issued rwts: total=2115,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:18.617 filename2: (groupid=0, jobs=1): err= 0: pid=92209: Sat Dec 14 06:57:30 2024 00:27:18.617 read: IOPS=213, BW=853KiB/s (873kB/s)(8560KiB/10039msec) 00:27:18.617 slat (usec): min=4, max=8035, avg=21.50, stdev=208.81 00:27:18.617 clat (msec): min=25, max=170, avg=74.91, stdev=22.77 00:27:18.617 lat (msec): min=25, max=170, avg=74.93, stdev=22.77 00:27:18.617 clat percentiles (msec): 00:27:18.617 | 1.00th=[ 34], 5.00th=[ 46], 10.00th=[ 49], 20.00th=[ 56], 00:27:18.617 | 30.00th=[ 63], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 75], 00:27:18.617 | 70.00th=[ 83], 80.00th=[ 99], 90.00th=[ 107], 95.00th=[ 114], 00:27:18.617 | 99.00th=[ 138], 99.50th=[ 148], 99.90th=[ 171], 99.95th=[ 171], 00:27:18.617 | 99.99th=[ 171] 00:27:18.617 bw ( KiB/s): min= 634, max= 1200, per=4.00%, avg=849.30, stdev=150.37, samples=20 00:27:18.617 iops : min= 158, max= 300, avg=212.30, stdev=37.63, samples=20 00:27:18.617 lat (msec) : 50=12.71%, 100=69.21%, 250=18.08% 00:27:18.617 cpu : usr=41.33%, sys=0.63%, ctx=1166, majf=0, minf=9 00:27:18.617 IO depths : 1=2.1%, 2=4.4%, 4=12.6%, 8=70.1%, 16=10.8%, 32=0.0%, >=64=0.0% 00:27:18.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.617 complete : 0=0.0%, 4=90.8%, 8=4.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.617 issued rwts: total=2140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.617 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:27:18.617 00:27:18.617 Run status group 0 (all jobs): 00:27:18.617 READ: bw=20.7MiB/s (21.7MB/s), 756KiB/s-1086KiB/s (774kB/s-1112kB/s), io=209MiB (219MB), run=10016-10079msec 00:27:18.617 06:57:30 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:18.617 06:57:30 -- target/dif.sh@43 -- # local sub 00:27:18.617 06:57:30 -- target/dif.sh@45 -- # for sub in "$@" 00:27:18.617 06:57:30 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:18.617 06:57:30 -- target/dif.sh@36 -- # local sub_id=0 00:27:18.617 06:57:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:18.617 06:57:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.617 06:57:30 -- common/autotest_common.sh@10 -- # set +x 00:27:18.617 06:57:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.617 06:57:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:18.617 06:57:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.617 06:57:30 -- common/autotest_common.sh@10 -- # set +x 00:27:18.617 06:57:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.617 06:57:30 -- target/dif.sh@45 -- # for sub in "$@" 00:27:18.617 06:57:30 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:18.617 06:57:30 -- target/dif.sh@36 -- # local sub_id=1 00:27:18.617 06:57:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:18.617 06:57:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.617 06:57:30 -- common/autotest_common.sh@10 -- # set +x 00:27:18.617 06:57:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.617 06:57:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:18.617 06:57:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.617 06:57:30 -- common/autotest_common.sh@10 -- # set +x 00:27:18.617 06:57:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.617 06:57:30 -- target/dif.sh@45 -- # for sub in "$@" 00:27:18.617 06:57:30 -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:18.617 06:57:30 -- target/dif.sh@36 -- # local sub_id=2 00:27:18.617 06:57:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:18.617 06:57:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.617 06:57:30 -- common/autotest_common.sh@10 -- # set +x 00:27:18.617 06:57:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.617 06:57:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:18.617 06:57:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.617 06:57:30 -- common/autotest_common.sh@10 -- # set +x 00:27:18.617 06:57:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.617 06:57:30 -- target/dif.sh@115 -- # NULL_DIF=1 00:27:18.617 06:57:30 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:18.617 06:57:30 -- target/dif.sh@115 -- # numjobs=2 00:27:18.617 06:57:30 -- target/dif.sh@115 -- # iodepth=8 00:27:18.617 06:57:30 -- target/dif.sh@115 -- # runtime=5 00:27:18.617 06:57:30 -- target/dif.sh@115 -- # files=1 00:27:18.617 06:57:30 -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:18.617 06:57:30 -- target/dif.sh@28 -- # local sub 00:27:18.617 06:57:30 -- target/dif.sh@30 -- # for sub in "$@" 00:27:18.617 06:57:30 -- target/dif.sh@31 -- # create_subsystem 0 00:27:18.617 06:57:30 -- target/dif.sh@18 -- # local sub_id=0 00:27:18.617 06:57:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 
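[editor's note] The teardown/rebuild traced here (destroy_subsystems 0 1 2 followed by create_subsystems 0 1) is the boilerplate dif.sh repeats between fio passes: delete the NVMe-oF subsystems, delete the null bdevs, then recreate them for the next parameter set. For orientation only, the same single-subsystem layout could be reproduced outside the test harness with SPDK's rpc.py helper. The RPC names and arguments below are copied from the rpc_cmd calls in this trace; the scripts/rpc.py path is an assumption about the SPDK checkout, not something shown in this log.

    # Sketch only: manual equivalent of create_subsystem 0 as traced above,
    # assuming a running nvmf target and scripts/rpc.py from the SPDK repo.
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Teardown mirrors it in reverse, nvmf_delete_subsystem and then bdev_null_delete, exactly as destroy_subsystem does in the trace above.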
00:27:18.617 06:57:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.617 06:57:30 -- common/autotest_common.sh@10 -- # set +x 00:27:18.617 bdev_null0 00:27:18.617 06:57:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.617 06:57:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:18.617 06:57:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.617 06:57:30 -- common/autotest_common.sh@10 -- # set +x 00:27:18.617 06:57:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.617 06:57:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:18.617 06:57:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.617 06:57:30 -- common/autotest_common.sh@10 -- # set +x 00:27:18.617 06:57:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.617 06:57:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:18.617 06:57:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.617 06:57:30 -- common/autotest_common.sh@10 -- # set +x 00:27:18.617 [2024-12-14 06:57:30.997808] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:18.617 06:57:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.617 06:57:31 -- target/dif.sh@30 -- # for sub in "$@" 00:27:18.617 06:57:31 -- target/dif.sh@31 -- # create_subsystem 1 00:27:18.617 06:57:31 -- target/dif.sh@18 -- # local sub_id=1 00:27:18.617 06:57:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:18.617 06:57:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.617 06:57:31 -- common/autotest_common.sh@10 -- # set +x 00:27:18.617 bdev_null1 00:27:18.617 06:57:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.617 06:57:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:18.617 06:57:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.617 06:57:31 -- common/autotest_common.sh@10 -- # set +x 00:27:18.617 06:57:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.617 06:57:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:18.617 06:57:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.617 06:57:31 -- common/autotest_common.sh@10 -- # set +x 00:27:18.617 06:57:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.617 06:57:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:18.617 06:57:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.617 06:57:31 -- common/autotest_common.sh@10 -- # set +x 00:27:18.617 06:57:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.617 06:57:31 -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:18.617 06:57:31 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:18.617 06:57:31 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:18.617 06:57:31 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:18.617 06:57:31 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:18.617 06:57:31 -- target/dif.sh@82 -- # 
gen_fio_conf 00:27:18.617 06:57:31 -- target/dif.sh@54 -- # local file 00:27:18.617 06:57:31 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:18.617 06:57:31 -- nvmf/common.sh@520 -- # config=() 00:27:18.617 06:57:31 -- target/dif.sh@56 -- # cat 00:27:18.618 06:57:31 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:18.618 06:57:31 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:18.618 06:57:31 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:18.618 06:57:31 -- common/autotest_common.sh@1330 -- # shift 00:27:18.618 06:57:31 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:18.618 06:57:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:18.618 06:57:31 -- nvmf/common.sh@520 -- # local subsystem config 00:27:18.618 06:57:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:18.618 06:57:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:18.618 { 00:27:18.618 "params": { 00:27:18.618 "name": "Nvme$subsystem", 00:27:18.618 "trtype": "$TEST_TRANSPORT", 00:27:18.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.618 "adrfam": "ipv4", 00:27:18.618 "trsvcid": "$NVMF_PORT", 00:27:18.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.618 "hdgst": ${hdgst:-false}, 00:27:18.618 "ddgst": ${ddgst:-false} 00:27:18.618 }, 00:27:18.618 "method": "bdev_nvme_attach_controller" 00:27:18.618 } 00:27:18.618 EOF 00:27:18.618 )") 00:27:18.618 06:57:31 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:18.618 06:57:31 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:18.618 06:57:31 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:18.618 06:57:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:18.618 06:57:31 -- target/dif.sh@72 -- # (( file <= files )) 00:27:18.618 06:57:31 -- target/dif.sh@73 -- # cat 00:27:18.618 06:57:31 -- nvmf/common.sh@542 -- # cat 00:27:18.618 06:57:31 -- target/dif.sh@72 -- # (( file++ )) 00:27:18.618 06:57:31 -- target/dif.sh@72 -- # (( file <= files )) 00:27:18.618 06:57:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:18.618 06:57:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:18.618 { 00:27:18.618 "params": { 00:27:18.618 "name": "Nvme$subsystem", 00:27:18.618 "trtype": "$TEST_TRANSPORT", 00:27:18.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.618 "adrfam": "ipv4", 00:27:18.618 "trsvcid": "$NVMF_PORT", 00:27:18.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.618 "hdgst": ${hdgst:-false}, 00:27:18.618 "ddgst": ${ddgst:-false} 00:27:18.618 }, 00:27:18.618 "method": "bdev_nvme_attach_controller" 00:27:18.618 } 00:27:18.618 EOF 00:27:18.618 )") 00:27:18.618 06:57:31 -- nvmf/common.sh@542 -- # cat 00:27:18.618 06:57:31 -- nvmf/common.sh@544 -- # jq . 
00:27:18.618 06:57:31 -- nvmf/common.sh@545 -- # IFS=, 00:27:18.618 06:57:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:18.618 "params": { 00:27:18.618 "name": "Nvme0", 00:27:18.618 "trtype": "tcp", 00:27:18.618 "traddr": "10.0.0.2", 00:27:18.618 "adrfam": "ipv4", 00:27:18.618 "trsvcid": "4420", 00:27:18.618 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:18.618 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:18.618 "hdgst": false, 00:27:18.618 "ddgst": false 00:27:18.618 }, 00:27:18.618 "method": "bdev_nvme_attach_controller" 00:27:18.618 },{ 00:27:18.618 "params": { 00:27:18.618 "name": "Nvme1", 00:27:18.618 "trtype": "tcp", 00:27:18.618 "traddr": "10.0.0.2", 00:27:18.618 "adrfam": "ipv4", 00:27:18.618 "trsvcid": "4420", 00:27:18.618 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:18.618 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:18.618 "hdgst": false, 00:27:18.618 "ddgst": false 00:27:18.618 }, 00:27:18.618 "method": "bdev_nvme_attach_controller" 00:27:18.618 }' 00:27:18.618 06:57:31 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:18.618 06:57:31 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:18.618 06:57:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:18.618 06:57:31 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:18.618 06:57:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:18.618 06:57:31 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:18.618 06:57:31 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:18.618 06:57:31 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:18.618 06:57:31 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:18.618 06:57:31 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:18.618 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:18.618 ... 00:27:18.618 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:18.618 ... 00:27:18.618 fio-3.35 00:27:18.618 Starting 4 threads 00:27:18.618 [2024-12-14 06:57:31.846700] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:27:18.618 [2024-12-14 06:57:31.847529] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:27:23.887 00:27:23.887 filename0: (groupid=0, jobs=1): err= 0: pid=92341: Sat Dec 14 06:57:36 2024 00:27:23.887 read: IOPS=2083, BW=16.3MiB/s (17.1MB/s)(81.4MiB/5002msec) 00:27:23.887 slat (nsec): min=6349, max=94596, avg=18314.21, stdev=10005.34 00:27:23.887 clat (usec): min=1546, max=6846, avg=3750.43, stdev=270.83 00:27:23.887 lat (usec): min=1566, max=6853, avg=3768.74, stdev=271.09 00:27:23.887 clat percentiles (usec): 00:27:23.887 | 1.00th=[ 3064], 5.00th=[ 3425], 10.00th=[ 3490], 20.00th=[ 3589], 00:27:23.887 | 30.00th=[ 3654], 40.00th=[ 3687], 50.00th=[ 3720], 60.00th=[ 3785], 00:27:23.887 | 70.00th=[ 3818], 80.00th=[ 3884], 90.00th=[ 4015], 95.00th=[ 4113], 00:27:23.887 | 99.00th=[ 4490], 99.50th=[ 5080], 99.90th=[ 5997], 99.95th=[ 6194], 00:27:23.887 | 99.99th=[ 6456] 00:27:23.887 bw ( KiB/s): min=16256, max=17648, per=25.01%, avg=16752.00, stdev=450.14, samples=9 00:27:23.887 iops : min= 2032, max= 2206, avg=2094.00, stdev=56.27, samples=9 00:27:23.887 lat (msec) : 2=0.04%, 4=89.54%, 10=10.42% 00:27:23.887 cpu : usr=95.06%, sys=3.60%, ctx=35, majf=0, minf=0 00:27:23.887 IO depths : 1=8.7%, 2=24.8%, 4=50.1%, 8=16.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:23.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.887 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.887 issued rwts: total=10424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.887 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:23.887 filename0: (groupid=0, jobs=1): err= 0: pid=92342: Sat Dec 14 06:57:36 2024 00:27:23.887 read: IOPS=2084, BW=16.3MiB/s (17.1MB/s)(81.5MiB/5003msec) 00:27:23.887 slat (nsec): min=5983, max=95411, avg=19536.42, stdev=11235.22 00:27:23.887 clat (usec): min=950, max=6174, avg=3748.03, stdev=264.82 00:27:23.887 lat (usec): min=957, max=6194, avg=3767.57, stdev=265.09 00:27:23.887 clat percentiles (usec): 00:27:23.887 | 1.00th=[ 3097], 5.00th=[ 3425], 10.00th=[ 3490], 20.00th=[ 3589], 00:27:23.887 | 30.00th=[ 3654], 40.00th=[ 3687], 50.00th=[ 3720], 60.00th=[ 3785], 00:27:23.887 | 70.00th=[ 3818], 80.00th=[ 3916], 90.00th=[ 4015], 95.00th=[ 4146], 00:27:23.887 | 99.00th=[ 4490], 99.50th=[ 4817], 99.90th=[ 5604], 99.95th=[ 5932], 00:27:23.887 | 99.99th=[ 6063] 00:27:23.887 bw ( KiB/s): min=16256, max=17571, per=25.00%, avg=16748.78, stdev=430.69, samples=9 00:27:23.887 iops : min= 2032, max= 2196, avg=2093.56, stdev=53.75, samples=9 00:27:23.887 lat (usec) : 1000=0.03% 00:27:23.887 lat (msec) : 2=0.08%, 4=88.79%, 10=11.11% 00:27:23.887 cpu : usr=94.80%, sys=3.74%, ctx=31, majf=0, minf=9 00:27:23.887 IO depths : 1=8.3%, 2=21.9%, 4=53.0%, 8=16.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:23.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.887 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.887 issued rwts: total=10427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.887 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:23.887 filename1: (groupid=0, jobs=1): err= 0: pid=92343: Sat Dec 14 06:57:36 2024 00:27:23.887 read: IOPS=2088, BW=16.3MiB/s (17.1MB/s)(81.6MiB/5003msec) 00:27:23.887 slat (nsec): min=6150, max=95345, avg=17229.69, stdev=11439.70 00:27:23.887 clat (usec): min=1113, max=5725, avg=3750.45, stdev=253.23 00:27:23.887 lat (usec): min=1119, max=5770, avg=3767.68, stdev=253.75 00:27:23.887 clat percentiles 
(usec): 00:27:23.887 | 1.00th=[ 3097], 5.00th=[ 3425], 10.00th=[ 3490], 20.00th=[ 3589], 00:27:23.887 | 30.00th=[ 3654], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3785], 00:27:23.887 | 70.00th=[ 3818], 80.00th=[ 3916], 90.00th=[ 4015], 95.00th=[ 4146], 00:27:23.887 | 99.00th=[ 4490], 99.50th=[ 4686], 99.90th=[ 5407], 99.95th=[ 5407], 00:27:23.887 | 99.99th=[ 5669] 00:27:23.887 bw ( KiB/s): min=16256, max=17536, per=25.05%, avg=16778.67, stdev=414.00, samples=9 00:27:23.887 iops : min= 2032, max= 2192, avg=2097.33, stdev=51.75, samples=9 00:27:23.887 lat (msec) : 2=0.11%, 4=88.45%, 10=11.44% 00:27:23.887 cpu : usr=95.04%, sys=3.62%, ctx=7, majf=0, minf=10 00:27:23.887 IO depths : 1=9.3%, 2=20.8%, 4=54.1%, 8=15.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:23.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.887 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.887 issued rwts: total=10449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.888 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:23.888 filename1: (groupid=0, jobs=1): err= 0: pid=92344: Sat Dec 14 06:57:36 2024 00:27:23.888 read: IOPS=2116, BW=16.5MiB/s (17.3MB/s)(82.7MiB/5002msec) 00:27:23.888 slat (usec): min=5, max=225, avg=11.25, stdev= 7.91 00:27:23.888 clat (usec): min=702, max=7212, avg=3729.93, stdev=379.83 00:27:23.888 lat (usec): min=709, max=7232, avg=3741.18, stdev=379.61 00:27:23.888 clat percentiles (usec): 00:27:23.888 | 1.00th=[ 1418], 5.00th=[ 3392], 10.00th=[ 3490], 20.00th=[ 3589], 00:27:23.888 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3818], 00:27:23.888 | 70.00th=[ 3851], 80.00th=[ 3916], 90.00th=[ 4015], 95.00th=[ 4146], 00:27:23.888 | 99.00th=[ 4359], 99.50th=[ 4555], 99.90th=[ 5211], 99.95th=[ 5342], 00:27:23.888 | 99.99th=[ 7177] 00:27:23.888 bw ( KiB/s): min=16512, max=18048, per=25.42%, avg=17025.78, stdev=450.38, samples=9 00:27:23.888 iops : min= 2064, max= 2256, avg=2128.22, stdev=56.30, samples=9 00:27:23.888 lat (usec) : 750=0.03%, 1000=0.06% 00:27:23.888 lat (msec) : 2=1.29%, 4=87.06%, 10=11.56% 00:27:23.888 cpu : usr=94.36%, sys=4.02%, ctx=79, majf=0, minf=9 00:27:23.888 IO depths : 1=6.8%, 2=16.8%, 4=57.6%, 8=18.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:23.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.888 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.888 issued rwts: total=10589,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.888 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:23.888 00:27:23.888 Run status group 0 (all jobs): 00:27:23.888 READ: bw=65.4MiB/s (68.6MB/s), 16.3MiB/s-16.5MiB/s (17.1MB/s-17.3MB/s), io=327MiB (343MB), run=5002-5003msec 00:27:23.888 06:57:37 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:23.888 06:57:37 -- target/dif.sh@43 -- # local sub 00:27:23.888 06:57:37 -- target/dif.sh@45 -- # for sub in "$@" 00:27:23.888 06:57:37 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:23.888 06:57:37 -- target/dif.sh@36 -- # local sub_id=0 00:27:23.888 06:57:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:23.888 06:57:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.888 06:57:37 -- common/autotest_common.sh@10 -- # set +x 00:27:23.888 06:57:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.888 06:57:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:23.888 06:57:37 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:23.888 06:57:37 -- common/autotest_common.sh@10 -- # set +x 00:27:23.888 06:57:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.888 06:57:37 -- target/dif.sh@45 -- # for sub in "$@" 00:27:23.888 06:57:37 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:23.888 06:57:37 -- target/dif.sh@36 -- # local sub_id=1 00:27:23.888 06:57:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:23.888 06:57:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.888 06:57:37 -- common/autotest_common.sh@10 -- # set +x 00:27:23.888 06:57:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.888 06:57:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:23.888 06:57:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.888 06:57:37 -- common/autotest_common.sh@10 -- # set +x 00:27:23.888 06:57:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.888 00:27:23.888 real 0m24.244s 00:27:23.888 user 2m7.894s 00:27:23.888 sys 0m3.878s 00:27:23.888 06:57:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:23.888 06:57:37 -- common/autotest_common.sh@10 -- # set +x 00:27:23.888 ************************************ 00:27:23.888 END TEST fio_dif_rand_params 00:27:23.888 ************************************ 00:27:23.888 06:57:37 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:23.888 06:57:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:23.888 06:57:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:23.888 06:57:37 -- common/autotest_common.sh@10 -- # set +x 00:27:23.888 ************************************ 00:27:23.888 START TEST fio_dif_digest 00:27:23.888 ************************************ 00:27:23.888 06:57:37 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:27:23.888 06:57:37 -- target/dif.sh@123 -- # local NULL_DIF 00:27:23.888 06:57:37 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:23.888 06:57:37 -- target/dif.sh@125 -- # local hdgst ddgst 00:27:23.888 06:57:37 -- target/dif.sh@127 -- # NULL_DIF=3 00:27:23.888 06:57:37 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:23.888 06:57:37 -- target/dif.sh@127 -- # numjobs=3 00:27:23.888 06:57:37 -- target/dif.sh@127 -- # iodepth=3 00:27:23.888 06:57:37 -- target/dif.sh@127 -- # runtime=10 00:27:23.888 06:57:37 -- target/dif.sh@128 -- # hdgst=true 00:27:23.888 06:57:37 -- target/dif.sh@128 -- # ddgst=true 00:27:23.888 06:57:37 -- target/dif.sh@130 -- # create_subsystems 0 00:27:23.888 06:57:37 -- target/dif.sh@28 -- # local sub 00:27:23.888 06:57:37 -- target/dif.sh@30 -- # for sub in "$@" 00:27:23.888 06:57:37 -- target/dif.sh@31 -- # create_subsystem 0 00:27:23.888 06:57:37 -- target/dif.sh@18 -- # local sub_id=0 00:27:23.888 06:57:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:23.888 06:57:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.888 06:57:37 -- common/autotest_common.sh@10 -- # set +x 00:27:23.888 bdev_null0 00:27:23.888 06:57:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.888 06:57:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:23.888 06:57:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.888 06:57:37 -- common/autotest_common.sh@10 -- # set +x 00:27:23.888 06:57:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.888 06:57:37 -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:23.888 06:57:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.888 06:57:37 -- common/autotest_common.sh@10 -- # set +x 00:27:23.888 06:57:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.888 06:57:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:23.888 06:57:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.888 06:57:37 -- common/autotest_common.sh@10 -- # set +x 00:27:23.888 [2024-12-14 06:57:37.409453] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:23.888 06:57:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.888 06:57:37 -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:23.888 06:57:37 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:23.888 06:57:37 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:23.888 06:57:37 -- nvmf/common.sh@520 -- # config=() 00:27:23.888 06:57:37 -- nvmf/common.sh@520 -- # local subsystem config 00:27:23.888 06:57:37 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:23.888 06:57:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:23.888 06:57:37 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:23.888 06:57:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:23.888 { 00:27:23.888 "params": { 00:27:23.888 "name": "Nvme$subsystem", 00:27:23.888 "trtype": "$TEST_TRANSPORT", 00:27:23.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:23.888 "adrfam": "ipv4", 00:27:23.888 "trsvcid": "$NVMF_PORT", 00:27:23.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:23.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:23.888 "hdgst": ${hdgst:-false}, 00:27:23.888 "ddgst": ${ddgst:-false} 00:27:23.888 }, 00:27:23.888 "method": "bdev_nvme_attach_controller" 00:27:23.888 } 00:27:23.888 EOF 00:27:23.888 )") 00:27:23.888 06:57:37 -- target/dif.sh@82 -- # gen_fio_conf 00:27:23.888 06:57:37 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:23.888 06:57:37 -- target/dif.sh@54 -- # local file 00:27:23.888 06:57:37 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:23.888 06:57:37 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:23.888 06:57:37 -- target/dif.sh@56 -- # cat 00:27:23.888 06:57:37 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:23.888 06:57:37 -- common/autotest_common.sh@1330 -- # shift 00:27:23.888 06:57:37 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:23.888 06:57:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:23.888 06:57:37 -- nvmf/common.sh@542 -- # cat 00:27:23.888 06:57:37 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:23.888 06:57:37 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:23.888 06:57:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:23.888 06:57:37 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:23.888 06:57:37 -- target/dif.sh@72 -- # (( file <= files )) 00:27:23.888 06:57:37 -- nvmf/common.sh@544 -- # jq . 
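[editor's note] At this point the digest-test target is up (a null bdev created with --dif-type 3 behind nqn.2016-06.io.spdk:cnode0, TCP listener on 10.0.0.2:4420), and gen_nvmf_target_json has just assembled the bdev_nvme_attach_controller parameters that fio's spdk_bdev plugin consumes; the resolved JSON, with "hdgst": true and "ddgst": true enabling header and data digests, is printed just below, and the harness feeds both it and the generated job file to fio over /dev/fd. As a rough standalone equivalent only: the exact gen_fio_conf job file is not shown in this log, and the bdev name Nvme0n1 and the target.json file are placeholders, not values taken from the trace. The bs/iodepth/numjobs/runtime values come from the dif.sh settings traced above.

    # Sketch: drive fio through the SPDK bdev plugin against a saved copy of the
    # JSON printed below ('target.json' and 'Nvme0n1' are assumed placeholders).
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --name=filename0 --thread=1 \
      --ioengine=spdk_bdev --spdk_json_conf target.json --filename=Nvme0n1 \
      --rw=randread --bs=128k --iodepth=3 --numjobs=3 --time_based=1 --runtime=10

Note the digest switches are not fio options; they travel in the attach-controller JSON, which is why the test only has to flip hdgst/ddgst to true to exercise them.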
00:27:23.888 06:57:37 -- nvmf/common.sh@545 -- # IFS=, 00:27:23.888 06:57:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:23.888 "params": { 00:27:23.888 "name": "Nvme0", 00:27:23.888 "trtype": "tcp", 00:27:23.888 "traddr": "10.0.0.2", 00:27:23.888 "adrfam": "ipv4", 00:27:23.888 "trsvcid": "4420", 00:27:23.888 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:23.888 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:23.888 "hdgst": true, 00:27:23.888 "ddgst": true 00:27:23.888 }, 00:27:23.888 "method": "bdev_nvme_attach_controller" 00:27:23.888 }' 00:27:23.888 06:57:37 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:23.888 06:57:37 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:23.888 06:57:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:23.888 06:57:37 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:23.888 06:57:37 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:23.888 06:57:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:23.888 06:57:37 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:23.888 06:57:37 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:23.888 06:57:37 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:23.888 06:57:37 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:23.888 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:23.888 ... 00:27:23.888 fio-3.35 00:27:23.888 Starting 3 threads 00:27:24.148 [2024-12-14 06:57:38.076205] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:27:24.148 [2024-12-14 06:57:38.076299] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:27:36.357 00:27:36.357 filename0: (groupid=0, jobs=1): err= 0: pid=92450: Sat Dec 14 06:57:48 2024 00:27:36.357 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(287MiB/10006msec) 00:27:36.357 slat (nsec): min=6507, max=63260, avg=14556.49, stdev=6006.89 00:27:36.357 clat (usec): min=5958, max=55360, avg=13037.73, stdev=8869.74 00:27:36.357 lat (usec): min=5988, max=55381, avg=13052.28, stdev=8869.97 00:27:36.357 clat percentiles (usec): 00:27:36.357 | 1.00th=[ 8356], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10159], 00:27:36.357 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:27:36.357 | 70.00th=[11469], 80.00th=[11994], 90.00th=[13042], 95.00th=[20579], 00:27:36.357 | 99.00th=[52691], 99.50th=[53740], 99.90th=[54789], 99.95th=[54789], 00:27:36.357 | 99.99th=[55313] 00:27:36.357 bw ( KiB/s): min=20480, max=37120, per=32.90%, avg=29291.79, stdev=4594.01, samples=19 00:27:36.357 iops : min= 160, max= 290, avg=228.84, stdev=35.89, samples=19 00:27:36.357 lat (msec) : 10=15.83%, 20=79.12%, 50=0.22%, 100=4.83% 00:27:36.357 cpu : usr=95.03%, sys=3.65%, ctx=12, majf=0, minf=9 00:27:36.357 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:36.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.357 issued rwts: total=2299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.357 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:36.357 filename0: (groupid=0, jobs=1): err= 0: pid=92451: Sat Dec 14 06:57:48 2024 00:27:36.357 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(267MiB/10045msec) 00:27:36.357 slat (nsec): min=6468, max=65169, avg=19069.08, stdev=7004.14 00:27:36.357 clat (usec): min=8078, max=49447, avg=14064.70, stdev=3177.55 00:27:36.357 lat (usec): min=8096, max=49467, avg=14083.77, stdev=3178.60 00:27:36.357 clat percentiles (usec): 00:27:36.357 | 1.00th=[ 8356], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[10683], 00:27:36.357 | 30.00th=[13698], 40.00th=[14222], 50.00th=[14484], 60.00th=[14877], 00:27:36.357 | 70.00th=[15270], 80.00th=[15926], 90.00th=[17171], 95.00th=[17957], 00:27:36.357 | 99.00th=[23725], 99.50th=[24249], 99.90th=[29754], 99.95th=[45351], 00:27:36.357 | 99.99th=[49546] 00:27:36.357 bw ( KiB/s): min=23552, max=32768, per=30.68%, avg=27317.65, stdev=2217.91, samples=20 00:27:36.357 iops : min= 184, max= 256, avg=213.40, stdev=17.35, samples=20 00:27:36.357 lat (msec) : 10=16.95%, 20=80.66%, 50=2.39% 00:27:36.357 cpu : usr=94.51%, sys=4.03%, ctx=18, majf=0, minf=9 00:27:36.357 IO depths : 1=4.2%, 2=95.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:36.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.357 issued rwts: total=2136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.357 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:36.357 filename0: (groupid=0, jobs=1): err= 0: pid=92452: Sat Dec 14 06:57:48 2024 00:27:36.357 read: IOPS=254, BW=31.8MiB/s (33.3MB/s)(319MiB/10046msec) 00:27:36.357 slat (nsec): min=6286, max=56976, avg=14686.85, stdev=6627.09 00:27:36.357 clat (usec): min=6216, max=54962, avg=11768.42, stdev=3219.29 00:27:36.357 lat (usec): min=6226, max=54979, avg=11783.11, stdev=3218.86 00:27:36.357 clat percentiles (usec): 
00:27:36.357 | 1.00th=[ 6915], 5.00th=[ 7504], 10.00th=[ 8029], 20.00th=[ 9241], 00:27:36.357 | 30.00th=[11076], 40.00th=[11731], 50.00th=[11994], 60.00th=[12518], 00:27:36.357 | 70.00th=[12780], 80.00th=[13304], 90.00th=[13829], 95.00th=[14484], 00:27:36.357 | 99.00th=[18220], 99.50th=[20579], 99.90th=[54264], 99.95th=[54264], 00:27:36.357 | 99.99th=[54789] 00:27:36.357 bw ( KiB/s): min=26368, max=39424, per=36.67%, avg=32649.75, stdev=2887.70, samples=20 00:27:36.357 iops : min= 206, max= 308, avg=255.05, stdev=22.58, samples=20 00:27:36.357 lat (msec) : 10=21.86%, 20=77.56%, 50=0.35%, 100=0.24% 00:27:36.357 cpu : usr=94.30%, sys=4.22%, ctx=18, majf=0, minf=9 00:27:36.357 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:36.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.357 issued rwts: total=2553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.357 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:36.357 00:27:36.357 Run status group 0 (all jobs): 00:27:36.357 READ: bw=86.9MiB/s (91.2MB/s), 26.6MiB/s-31.8MiB/s (27.9MB/s-33.3MB/s), io=874MiB (916MB), run=10006-10046msec 00:27:36.357 06:57:48 -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:36.357 06:57:48 -- target/dif.sh@43 -- # local sub 00:27:36.357 06:57:48 -- target/dif.sh@45 -- # for sub in "$@" 00:27:36.357 06:57:48 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:36.357 06:57:48 -- target/dif.sh@36 -- # local sub_id=0 00:27:36.357 06:57:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:36.357 06:57:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.357 06:57:48 -- common/autotest_common.sh@10 -- # set +x 00:27:36.357 06:57:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.357 06:57:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:36.357 06:57:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.357 06:57:48 -- common/autotest_common.sh@10 -- # set +x 00:27:36.357 06:57:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.357 00:27:36.357 real 0m11.165s 00:27:36.357 user 0m29.235s 00:27:36.357 sys 0m1.521s 00:27:36.357 06:57:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:36.357 06:57:48 -- common/autotest_common.sh@10 -- # set +x 00:27:36.357 ************************************ 00:27:36.357 END TEST fio_dif_digest 00:27:36.357 ************************************ 00:27:36.357 06:57:48 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:36.357 06:57:48 -- target/dif.sh@147 -- # nvmftestfini 00:27:36.357 06:57:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:36.357 06:57:48 -- nvmf/common.sh@116 -- # sync 00:27:36.357 06:57:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:36.357 06:57:48 -- nvmf/common.sh@119 -- # set +e 00:27:36.357 06:57:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:36.357 06:57:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:36.357 rmmod nvme_tcp 00:27:36.357 rmmod nvme_fabrics 00:27:36.357 rmmod nvme_keyring 00:27:36.357 06:57:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:36.357 06:57:48 -- nvmf/common.sh@123 -- # set -e 00:27:36.357 06:57:48 -- nvmf/common.sh@124 -- # return 0 00:27:36.357 06:57:48 -- nvmf/common.sh@477 -- # '[' -n 91674 ']' 00:27:36.357 06:57:48 -- nvmf/common.sh@478 -- # killprocess 91674 00:27:36.357 06:57:48 -- common/autotest_common.sh@936 -- # '[' 
-z 91674 ']' 00:27:36.357 06:57:48 -- common/autotest_common.sh@940 -- # kill -0 91674 00:27:36.357 06:57:48 -- common/autotest_common.sh@941 -- # uname 00:27:36.357 06:57:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:36.357 06:57:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91674 00:27:36.357 06:57:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:36.357 killing process with pid 91674 00:27:36.357 06:57:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:36.358 06:57:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91674' 00:27:36.358 06:57:48 -- common/autotest_common.sh@955 -- # kill 91674 00:27:36.358 06:57:48 -- common/autotest_common.sh@960 -- # wait 91674 00:27:36.358 06:57:49 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:27:36.358 06:57:49 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:36.358 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:36.358 Waiting for block devices as requested 00:27:36.358 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:36.358 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:36.358 06:57:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:36.358 06:57:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:36.358 06:57:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:36.358 06:57:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:36.358 06:57:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.358 06:57:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:36.358 06:57:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.358 06:57:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:36.358 00:27:36.358 real 1m1.091s 00:27:36.358 user 3m54.129s 00:27:36.358 sys 0m13.952s 00:27:36.358 06:57:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:36.358 06:57:49 -- common/autotest_common.sh@10 -- # set +x 00:27:36.358 ************************************ 00:27:36.358 END TEST nvmf_dif 00:27:36.358 ************************************ 00:27:36.358 06:57:49 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:36.358 06:57:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:36.358 06:57:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:36.358 06:57:49 -- common/autotest_common.sh@10 -- # set +x 00:27:36.358 ************************************ 00:27:36.358 START TEST nvmf_abort_qd_sizes 00:27:36.358 ************************************ 00:27:36.358 06:57:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:36.358 * Looking for test storage... 
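The destroy_subsystems teardown traced above tears the dif target down over the SPDK RPC socket: nvmf_delete_subsystem removes the NVMe-oF subsystem and bdev_null_delete drops the null bdev that backed its namespace, before the nvme-tcp/fabrics modules are unloaded. A minimal standalone sketch of the same teardown via scripts/rpc.py, assuming the default RPC socket and the subsystem/bdev names used by this test:

  # remove the subsystem first so no initiator can still reach the namespace
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  # then delete the null bdev that backed namespace 1
  scripts/rpc.py bdev_null_delete bdev_null0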
00:27:36.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:36.358 06:57:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:36.358 06:57:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:36.358 06:57:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:36.358 06:57:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:36.358 06:57:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:36.358 06:57:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:36.358 06:57:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:36.358 06:57:49 -- scripts/common.sh@335 -- # IFS=.-: 00:27:36.358 06:57:49 -- scripts/common.sh@335 -- # read -ra ver1 00:27:36.358 06:57:49 -- scripts/common.sh@336 -- # IFS=.-: 00:27:36.358 06:57:49 -- scripts/common.sh@336 -- # read -ra ver2 00:27:36.358 06:57:49 -- scripts/common.sh@337 -- # local 'op=<' 00:27:36.358 06:57:49 -- scripts/common.sh@339 -- # ver1_l=2 00:27:36.358 06:57:49 -- scripts/common.sh@340 -- # ver2_l=1 00:27:36.358 06:57:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:36.358 06:57:49 -- scripts/common.sh@343 -- # case "$op" in 00:27:36.358 06:57:49 -- scripts/common.sh@344 -- # : 1 00:27:36.358 06:57:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:36.358 06:57:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:36.358 06:57:49 -- scripts/common.sh@364 -- # decimal 1 00:27:36.358 06:57:49 -- scripts/common.sh@352 -- # local d=1 00:27:36.358 06:57:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:36.358 06:57:49 -- scripts/common.sh@354 -- # echo 1 00:27:36.358 06:57:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:36.358 06:57:49 -- scripts/common.sh@365 -- # decimal 2 00:27:36.358 06:57:49 -- scripts/common.sh@352 -- # local d=2 00:27:36.358 06:57:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:36.358 06:57:49 -- scripts/common.sh@354 -- # echo 2 00:27:36.358 06:57:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:36.358 06:57:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:36.358 06:57:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:36.358 06:57:49 -- scripts/common.sh@367 -- # return 0 00:27:36.358 06:57:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:36.358 06:57:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:36.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.358 --rc genhtml_branch_coverage=1 00:27:36.358 --rc genhtml_function_coverage=1 00:27:36.358 --rc genhtml_legend=1 00:27:36.358 --rc geninfo_all_blocks=1 00:27:36.358 --rc geninfo_unexecuted_blocks=1 00:27:36.358 00:27:36.358 ' 00:27:36.358 06:57:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:36.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.358 --rc genhtml_branch_coverage=1 00:27:36.358 --rc genhtml_function_coverage=1 00:27:36.358 --rc genhtml_legend=1 00:27:36.358 --rc geninfo_all_blocks=1 00:27:36.358 --rc geninfo_unexecuted_blocks=1 00:27:36.358 00:27:36.358 ' 00:27:36.358 06:57:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:36.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.358 --rc genhtml_branch_coverage=1 00:27:36.358 --rc genhtml_function_coverage=1 00:27:36.358 --rc genhtml_legend=1 00:27:36.358 --rc geninfo_all_blocks=1 00:27:36.358 --rc geninfo_unexecuted_blocks=1 00:27:36.358 00:27:36.358 ' 00:27:36.358 
06:57:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:36.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.358 --rc genhtml_branch_coverage=1 00:27:36.358 --rc genhtml_function_coverage=1 00:27:36.358 --rc genhtml_legend=1 00:27:36.358 --rc geninfo_all_blocks=1 00:27:36.358 --rc geninfo_unexecuted_blocks=1 00:27:36.358 00:27:36.358 ' 00:27:36.358 06:57:49 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:36.358 06:57:49 -- nvmf/common.sh@7 -- # uname -s 00:27:36.358 06:57:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:36.358 06:57:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:36.358 06:57:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:36.358 06:57:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:36.358 06:57:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:36.358 06:57:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:36.358 06:57:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:36.358 06:57:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:36.358 06:57:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:36.358 06:57:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:36.358 06:57:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 00:27:36.358 06:57:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=383f1f3e-75a9-4e00-b5aa-c669351be986 00:27:36.358 06:57:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:36.358 06:57:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:36.358 06:57:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:36.358 06:57:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:36.358 06:57:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:36.358 06:57:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.358 06:57:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.358 06:57:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.358 06:57:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.358 06:57:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.358 06:57:49 -- paths/export.sh@5 -- # export PATH 00:27:36.358 06:57:49 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.358 06:57:49 -- nvmf/common.sh@46 -- # : 0 00:27:36.358 06:57:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:36.358 06:57:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:36.358 06:57:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:36.358 06:57:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.358 06:57:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.358 06:57:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:36.358 06:57:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:36.358 06:57:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:36.358 06:57:49 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:27:36.358 06:57:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:36.358 06:57:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:36.358 06:57:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:36.358 06:57:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:36.358 06:57:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:36.358 06:57:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.358 06:57:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:36.358 06:57:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.358 06:57:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:27:36.358 06:57:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:27:36.358 06:57:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:27:36.358 06:57:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:27:36.358 06:57:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:27:36.358 06:57:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:27:36.358 06:57:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:36.358 06:57:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:36.358 06:57:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:36.358 06:57:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:27:36.358 06:57:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:36.359 06:57:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:36.359 06:57:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:36.359 06:57:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:36.359 06:57:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:36.359 06:57:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:36.359 06:57:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:36.359 06:57:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:36.359 06:57:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:27:36.359 06:57:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:27:36.359 Cannot find device "nvmf_tgt_br" 00:27:36.359 06:57:49 -- nvmf/common.sh@154 -- # true 00:27:36.359 06:57:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:27:36.359 Cannot find device "nvmf_tgt_br2" 00:27:36.359 06:57:49 -- nvmf/common.sh@155 -- # true 
00:27:36.359 06:57:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:27:36.359 06:57:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:27:36.359 Cannot find device "nvmf_tgt_br" 00:27:36.359 06:57:49 -- nvmf/common.sh@157 -- # true 00:27:36.359 06:57:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:27:36.359 Cannot find device "nvmf_tgt_br2" 00:27:36.359 06:57:49 -- nvmf/common.sh@158 -- # true 00:27:36.359 06:57:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:27:36.359 06:57:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:27:36.359 06:57:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:36.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:36.359 06:57:50 -- nvmf/common.sh@161 -- # true 00:27:36.359 06:57:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:36.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:36.359 06:57:50 -- nvmf/common.sh@162 -- # true 00:27:36.359 06:57:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:27:36.359 06:57:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:36.359 06:57:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:36.359 06:57:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:36.359 06:57:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:36.359 06:57:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:36.359 06:57:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:36.359 06:57:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:36.359 06:57:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:36.359 06:57:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:27:36.359 06:57:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:27:36.359 06:57:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:27:36.359 06:57:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:27:36.359 06:57:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:36.359 06:57:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:36.359 06:57:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:36.359 06:57:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:27:36.359 06:57:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:27:36.359 06:57:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:27:36.359 06:57:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:36.359 06:57:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:36.359 06:57:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:36.359 06:57:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:36.359 06:57:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:27:36.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:36.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:27:36.359 00:27:36.359 --- 10.0.0.2 ping statistics --- 00:27:36.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.359 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:27:36.359 06:57:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:27:36.359 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:36.359 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:27:36.359 00:27:36.359 --- 10.0.0.3 ping statistics --- 00:27:36.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.359 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:27:36.359 06:57:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:36.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:36.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:27:36.359 00:27:36.359 --- 10.0.0.1 ping statistics --- 00:27:36.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.359 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:27:36.359 06:57:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:36.359 06:57:50 -- nvmf/common.sh@421 -- # return 0 00:27:36.359 06:57:50 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:27:36.359 06:57:50 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:36.927 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:37.192 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:27:37.192 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:27:37.192 06:57:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:37.192 06:57:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:37.192 06:57:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:37.192 06:57:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:37.192 06:57:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:37.192 06:57:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:37.192 06:57:51 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:27:37.192 06:57:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:37.192 06:57:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:37.192 06:57:51 -- common/autotest_common.sh@10 -- # set +x 00:27:37.192 06:57:51 -- nvmf/common.sh@469 -- # nvmfpid=93052 00:27:37.192 06:57:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:37.192 06:57:51 -- nvmf/common.sh@470 -- # waitforlisten 93052 00:27:37.192 06:57:51 -- common/autotest_common.sh@829 -- # '[' -z 93052 ']' 00:27:37.192 06:57:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.192 06:57:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:37.192 06:57:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.192 06:57:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:37.192 06:57:51 -- common/autotest_common.sh@10 -- # set +x 00:27:37.450 [2024-12-14 06:57:51.220410] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
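The nvmf_veth_init trace above builds the virtual topology these TCP tests run on: a nvmf_tgt_ns_spdk network namespace holding the target-side veth ends (10.0.0.2, 10.0.0.3), an initiator-side veth on the host (10.0.0.1), a nvmf_br bridge joining the peer ends, an iptables rule admitting port 4420, and ping checks in each direction. A condensed sketch of the same setup for one target interface, assuming the interface names and 10.0.0.0/24 addressing used here:

  ip netns add nvmf_tgt_ns_spdk
  # initiator-side and target-side veth pairs; one end of each stays on the host for the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers so initiator and target namespace can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # admit NVMe/TCP traffic on 4420 and allow forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # sanity check, as the trace does
  ping -c 1 10.0.0.2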
00:27:37.450 [2024-12-14 06:57:51.220547] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.450 [2024-12-14 06:57:51.377134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:37.709 [2024-12-14 06:57:51.508133] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:37.709 [2024-12-14 06:57:51.508646] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.709 [2024-12-14 06:57:51.508818] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:37.709 [2024-12-14 06:57:51.508923] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:37.709 [2024-12-14 06:57:51.509144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.709 [2024-12-14 06:57:51.509271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:37.709 [2024-12-14 06:57:51.509815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:37.709 [2024-12-14 06:57:51.509854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.277 06:57:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:38.277 06:57:52 -- common/autotest_common.sh@862 -- # return 0 00:27:38.277 06:57:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:38.277 06:57:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:38.277 06:57:52 -- common/autotest_common.sh@10 -- # set +x 00:27:38.536 06:57:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:27:38.536 06:57:52 -- scripts/common.sh@311 -- # local bdf bdfs 00:27:38.536 06:57:52 -- scripts/common.sh@312 -- # local nvmes 00:27:38.536 06:57:52 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:27:38.536 06:57:52 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:27:38.536 06:57:52 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:27:38.536 06:57:52 -- scripts/common.sh@297 -- # local bdf= 00:27:38.536 06:57:52 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:27:38.536 06:57:52 -- scripts/common.sh@232 -- # local class 00:27:38.536 06:57:52 -- scripts/common.sh@233 -- # local subclass 00:27:38.536 06:57:52 -- scripts/common.sh@234 -- # local progif 00:27:38.536 06:57:52 -- scripts/common.sh@235 -- # printf %02x 1 00:27:38.536 06:57:52 -- scripts/common.sh@235 -- # class=01 00:27:38.536 06:57:52 -- scripts/common.sh@236 -- # printf %02x 8 00:27:38.536 06:57:52 -- scripts/common.sh@236 -- # subclass=08 00:27:38.536 06:57:52 -- scripts/common.sh@237 -- # printf %02x 2 00:27:38.536 06:57:52 -- scripts/common.sh@237 -- # progif=02 00:27:38.536 06:57:52 -- scripts/common.sh@239 -- # hash lspci 00:27:38.536 06:57:52 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:27:38.536 06:57:52 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:27:38.536 06:57:52 -- scripts/common.sh@242 -- # grep -i -- -p02 00:27:38.536 06:57:52 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:27:38.536 06:57:52 -- scripts/common.sh@244 -- # tr -d '"' 00:27:38.536 06:57:52 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:38.536 06:57:52 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:27:38.536 06:57:52 -- scripts/common.sh@15 -- # local i 00:27:38.536 06:57:52 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:27:38.536 06:57:52 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:27:38.536 06:57:52 -- scripts/common.sh@24 -- # return 0 00:27:38.536 06:57:52 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:27:38.536 06:57:52 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:38.536 06:57:52 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:27:38.536 06:57:52 -- scripts/common.sh@15 -- # local i 00:27:38.536 06:57:52 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:27:38.536 06:57:52 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:27:38.536 06:57:52 -- scripts/common.sh@24 -- # return 0 00:27:38.536 06:57:52 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:27:38.536 06:57:52 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:27:38.536 06:57:52 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:27:38.536 06:57:52 -- scripts/common.sh@322 -- # uname -s 00:27:38.536 06:57:52 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:27:38.536 06:57:52 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:27:38.536 06:57:52 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:27:38.536 06:57:52 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:27:38.536 06:57:52 -- scripts/common.sh@322 -- # uname -s 00:27:38.536 06:57:52 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:27:38.536 06:57:52 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:27:38.536 06:57:52 -- scripts/common.sh@327 -- # (( 2 )) 00:27:38.536 06:57:52 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:27:38.536 06:57:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:38.536 06:57:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:38.536 06:57:52 -- common/autotest_common.sh@10 -- # set +x 00:27:38.536 ************************************ 00:27:38.536 START TEST spdk_target_abort 00:27:38.536 ************************************ 00:27:38.536 06:57:52 -- common/autotest_common.sh@1114 -- # spdk_target 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:27:38.536 06:57:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.536 06:57:52 -- common/autotest_common.sh@10 -- # set +x 00:27:38.536 spdk_targetn1 00:27:38.536 06:57:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:38.536 06:57:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.536 06:57:52 -- common/autotest_common.sh@10 -- # set +x 00:27:38.536 [2024-12-14 
06:57:52.413152] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:38.536 06:57:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:27:38.536 06:57:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.536 06:57:52 -- common/autotest_common.sh@10 -- # set +x 00:27:38.536 06:57:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:27:38.536 06:57:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.536 06:57:52 -- common/autotest_common.sh@10 -- # set +x 00:27:38.536 06:57:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:27:38.536 06:57:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.536 06:57:52 -- common/autotest_common.sh@10 -- # set +x 00:27:38.536 [2024-12-14 06:57:52.441335] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:38.536 06:57:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:38.536 06:57:52 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:41.823 Initializing NVMe Controllers 00:27:41.823 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:27:41.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:27:41.824 Initialization complete. Launching workers. 00:27:41.824 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 12100, failed: 0 00:27:41.824 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1076, failed to submit 11024 00:27:41.824 success 783, unsuccess 293, failed 0 00:27:41.824 06:57:55 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:41.824 06:57:55 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:45.110 Initializing NVMe Controllers 00:27:45.110 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:27:45.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:27:45.110 Initialization complete. Launching workers. 00:27:45.110 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5948, failed: 0 00:27:45.110 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1216, failed to submit 4732 00:27:45.110 success 262, unsuccess 954, failed 0 00:27:45.110 06:57:59 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:45.110 06:57:59 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:48.395 Initializing NVMe Controllers 00:27:48.395 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:27:48.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:27:48.395 Initialization complete. Launching workers. 
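The spdk_target_abort passes above all follow one pattern: the nvmf_tgt started inside the target namespace is given a TCP transport, an NVMe bdev attached from PCI device 0000:00:06.0, a subsystem carrying that bdev as namespace 1, and a listener on 10.0.0.2:4420, after which the abort example issues aborts against it at queue depths 4, 24 and 64. A condensed sketch of that sequence with the names used in this run (paths relative to the SPDK repo; the target's full runtime flags are omitted, and the target must be up before the RPCs are issued):

  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -m 0xf &
  # wait for the target to come up, then configure it over RPC
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420
  # one abort pass; the test repeats this with -q 24 and -q 64
  build/examples/abort -q 4 -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'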
00:27:48.395 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 33540, failed: 0 00:27:48.395 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2629, failed to submit 30911 00:27:48.395 success 561, unsuccess 2068, failed 0 00:27:48.395 06:58:02 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:27:48.395 06:58:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.395 06:58:02 -- common/autotest_common.sh@10 -- # set +x 00:27:48.395 06:58:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.395 06:58:02 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:48.395 06:58:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.395 06:58:02 -- common/autotest_common.sh@10 -- # set +x 00:27:48.962 06:58:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.962 06:58:02 -- target/abort_qd_sizes.sh@62 -- # killprocess 93052 00:27:48.962 06:58:02 -- common/autotest_common.sh@936 -- # '[' -z 93052 ']' 00:27:48.962 06:58:02 -- common/autotest_common.sh@940 -- # kill -0 93052 00:27:48.962 06:58:02 -- common/autotest_common.sh@941 -- # uname 00:27:48.962 06:58:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:48.962 06:58:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93052 00:27:48.962 killing process with pid 93052 00:27:48.962 06:58:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:48.962 06:58:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:48.962 06:58:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93052' 00:27:48.962 06:58:02 -- common/autotest_common.sh@955 -- # kill 93052 00:27:48.962 06:58:02 -- common/autotest_common.sh@960 -- # wait 93052 00:27:49.221 00:27:49.221 real 0m10.835s 00:27:49.221 user 0m44.181s 00:27:49.221 sys 0m1.802s 00:27:49.221 06:58:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:49.221 06:58:03 -- common/autotest_common.sh@10 -- # set +x 00:27:49.221 ************************************ 00:27:49.221 END TEST spdk_target_abort 00:27:49.221 ************************************ 00:27:49.480 06:58:03 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:27:49.480 06:58:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:49.480 06:58:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:49.480 06:58:03 -- common/autotest_common.sh@10 -- # set +x 00:27:49.480 ************************************ 00:27:49.480 START TEST kernel_target_abort 00:27:49.480 ************************************ 00:27:49.480 06:58:03 -- common/autotest_common.sh@1114 -- # kernel_target 00:27:49.480 06:58:03 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:27:49.480 06:58:03 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:27:49.480 06:58:03 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:27:49.480 06:58:03 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:27:49.480 06:58:03 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:27:49.480 06:58:03 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:49.480 06:58:03 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:49.480 06:58:03 -- nvmf/common.sh@627 -- # local block nvme 00:27:49.480 06:58:03 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:27:49.480 06:58:03 -- nvmf/common.sh@630 -- # modprobe nvmet 00:27:49.480 06:58:03 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:49.480 06:58:03 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:49.739 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:49.739 Waiting for block devices as requested 00:27:49.739 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:49.997 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:49.997 06:58:03 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:49.997 06:58:03 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:49.997 06:58:03 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:27:49.997 06:58:03 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:27:49.997 06:58:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:27:49.997 No valid GPT data, bailing 00:27:49.997 06:58:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:49.997 06:58:03 -- scripts/common.sh@393 -- # pt= 00:27:49.997 06:58:03 -- scripts/common.sh@394 -- # return 1 00:27:49.997 06:58:03 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:27:49.997 06:58:03 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:49.997 06:58:03 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:49.997 06:58:03 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:27:49.997 06:58:03 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:27:49.997 06:58:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:27:49.997 No valid GPT data, bailing 00:27:49.997 06:58:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:49.997 06:58:03 -- scripts/common.sh@393 -- # pt= 00:27:49.997 06:58:03 -- scripts/common.sh@394 -- # return 1 00:27:49.997 06:58:03 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:27:49.997 06:58:03 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:49.997 06:58:03 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:27:49.997 06:58:03 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:27:49.997 06:58:03 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:27:49.997 06:58:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:27:50.256 No valid GPT data, bailing 00:27:50.256 06:58:04 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:27:50.256 06:58:04 -- scripts/common.sh@393 -- # pt= 00:27:50.256 06:58:04 -- scripts/common.sh@394 -- # return 1 00:27:50.256 06:58:04 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:27:50.256 06:58:04 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:50.256 06:58:04 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:27:50.256 06:58:04 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:27:50.256 06:58:04 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:27:50.256 06:58:04 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:27:50.256 No valid GPT data, bailing 00:27:50.256 06:58:04 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:27:50.256 06:58:04 -- scripts/common.sh@393 -- # pt= 00:27:50.256 06:58:04 -- scripts/common.sh@394 -- # return 1 00:27:50.256 06:58:04 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:27:50.256 06:58:04 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:27:50.256 06:58:04 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:27:50.256 06:58:04 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:50.256 06:58:04 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:50.256 06:58:04 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:27:50.256 06:58:04 -- nvmf/common.sh@654 -- # echo 1 00:27:50.256 06:58:04 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:27:50.256 06:58:04 -- nvmf/common.sh@656 -- # echo 1 00:27:50.256 06:58:04 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:27:50.256 06:58:04 -- nvmf/common.sh@663 -- # echo tcp 00:27:50.256 06:58:04 -- nvmf/common.sh@664 -- # echo 4420 00:27:50.256 06:58:04 -- nvmf/common.sh@665 -- # echo ipv4 00:27:50.256 06:58:04 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:50.256 06:58:04 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:383f1f3e-75a9-4e00-b5aa-c669351be986 --hostid=383f1f3e-75a9-4e00-b5aa-c669351be986 -a 10.0.0.1 -t tcp -s 4420 00:27:50.256 00:27:50.256 Discovery Log Number of Records 2, Generation counter 2 00:27:50.256 =====Discovery Log Entry 0====== 00:27:50.256 trtype: tcp 00:27:50.256 adrfam: ipv4 00:27:50.256 subtype: current discovery subsystem 00:27:50.256 treq: not specified, sq flow control disable supported 00:27:50.256 portid: 1 00:27:50.256 trsvcid: 4420 00:27:50.256 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:50.256 traddr: 10.0.0.1 00:27:50.256 eflags: none 00:27:50.256 sectype: none 00:27:50.256 =====Discovery Log Entry 1====== 00:27:50.257 trtype: tcp 00:27:50.257 adrfam: ipv4 00:27:50.257 subtype: nvme subsystem 00:27:50.257 treq: not specified, sq flow control disable supported 00:27:50.257 portid: 1 00:27:50.257 trsvcid: 4420 00:27:50.257 subnqn: kernel_target 00:27:50.257 traddr: 10.0.0.1 00:27:50.257 eflags: none 00:27:50.257 sectype: none 00:27:50.257 06:58:04 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:27:50.257 06:58:04 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:50.257 06:58:04 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:50.257 06:58:04 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:50.257 06:58:04 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:50.257 06:58:04 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:27:50.257 06:58:04 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:50.257 06:58:04 -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:50.257 06:58:04 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:50.257 06:58:04 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:50.257 06:58:04 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:50.257 06:58:04 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:50.257 06:58:04 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:50.257 06:58:04 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:50.257 06:58:04 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:50.257 06:58:04 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:50.257 06:58:04 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
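The kernel_target_abort setup above points the in-kernel nvmet target at /dev/nvme1n3 purely through configfs and then verifies it with nvme discover against 10.0.0.1:4420. The trace shows the values being written but not the attribute files they land in (xtrace does not print redirections), so the file names below are the standard nvmet configfs layout rather than something visible in this log; a sketch of the same configuration under that assumption:

  modprobe nvmet
  modprobe nvmet-tcp   # on some kernels loaded automatically when tcp is written to addr_trtype
  mkdir /sys/kernel/config/nvmet/subsystems/kernel_target
  mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
  mkdir /sys/kernel/config/nvmet/ports/1
  echo SPDK-kernel_target > /sys/kernel/config/nvmet/subsystems/kernel_target/attr_serial
  echo 1 > /sys/kernel/config/nvmet/subsystems/kernel_target/attr_allow_any_host
  echo /dev/nvme1n3 > /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1/device_path
  echo 1 > /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1/enable
  echo 10.0.0.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
  echo tcp > /sys/kernel/config/nvmet/ports/1/addr_trtype
  echo 4420 > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
  echo ipv4 > /sys/kernel/config/nvmet/ports/1/addr_adrfam
  # expose the subsystem on the port
  ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/
  # should report a discovery entry plus the kernel_target subsystem, as in the log above
  nvme discover -t tcp -a 10.0.0.1 -s 4420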
00:27:50.257 06:58:04 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:50.257 06:58:04 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:50.257 06:58:04 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:50.257 06:58:04 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:53.573 Initializing NVMe Controllers 00:27:53.573 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:53.573 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:53.573 Initialization complete. Launching workers. 00:27:53.573 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 31165, failed: 0 00:27:53.573 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 31165, failed to submit 0 00:27:53.573 success 0, unsuccess 31165, failed 0 00:27:53.573 06:58:07 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:53.573 06:58:07 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:56.861 Initializing NVMe Controllers 00:27:56.861 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:56.861 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:56.861 Initialization complete. Launching workers. 00:27:56.861 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 68456, failed: 0 00:27:56.861 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 28865, failed to submit 39591 00:27:56.861 success 0, unsuccess 28865, failed 0 00:27:56.861 06:58:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:56.861 06:58:10 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:28:00.148 Initializing NVMe Controllers 00:28:00.148 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:28:00.148 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:28:00.148 Initialization complete. Launching workers. 
00:28:00.148 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 78770, failed: 0 00:28:00.148 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 19674, failed to submit 59096 00:28:00.148 success 0, unsuccess 19674, failed 0 00:28:00.148 06:58:13 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:28:00.148 06:58:13 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:28:00.148 06:58:13 -- nvmf/common.sh@677 -- # echo 0 00:28:00.148 06:58:13 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:28:00.148 06:58:13 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:28:00.148 06:58:13 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:00.148 06:58:13 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:28:00.148 06:58:13 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:28:00.148 06:58:13 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:28:00.148 ************************************ 00:28:00.148 END TEST kernel_target_abort 00:28:00.148 ************************************ 00:28:00.148 00:28:00.148 real 0m10.524s 00:28:00.148 user 0m5.402s 00:28:00.148 sys 0m2.511s 00:28:00.148 06:58:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:00.148 06:58:13 -- common/autotest_common.sh@10 -- # set +x 00:28:00.148 06:58:13 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:28:00.148 06:58:13 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:28:00.148 06:58:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:00.148 06:58:13 -- nvmf/common.sh@116 -- # sync 00:28:00.148 06:58:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:00.148 06:58:13 -- nvmf/common.sh@119 -- # set +e 00:28:00.148 06:58:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:00.148 06:58:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:00.148 rmmod nvme_tcp 00:28:00.148 rmmod nvme_fabrics 00:28:00.148 rmmod nvme_keyring 00:28:00.148 06:58:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:00.148 Process with pid 93052 is not found 00:28:00.148 06:58:13 -- nvmf/common.sh@123 -- # set -e 00:28:00.149 06:58:13 -- nvmf/common.sh@124 -- # return 0 00:28:00.149 06:58:13 -- nvmf/common.sh@477 -- # '[' -n 93052 ']' 00:28:00.149 06:58:13 -- nvmf/common.sh@478 -- # killprocess 93052 00:28:00.149 06:58:13 -- common/autotest_common.sh@936 -- # '[' -z 93052 ']' 00:28:00.149 06:58:13 -- common/autotest_common.sh@940 -- # kill -0 93052 00:28:00.149 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (93052) - No such process 00:28:00.149 06:58:13 -- common/autotest_common.sh@963 -- # echo 'Process with pid 93052 is not found' 00:28:00.149 06:58:13 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:28:00.149 06:58:13 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:00.716 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:00.716 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:28:00.716 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:28:00.716 06:58:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:00.716 06:58:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:00.716 06:58:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:00.716 06:58:14 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:28:00.716 06:58:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.716 06:58:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:00.716 06:58:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.716 06:58:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:28:00.716 00:28:00.716 real 0m25.027s 00:28:00.716 user 0m51.089s 00:28:00.716 sys 0m5.676s 00:28:00.716 06:58:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:00.716 06:58:14 -- common/autotest_common.sh@10 -- # set +x 00:28:00.716 ************************************ 00:28:00.716 END TEST nvmf_abort_qd_sizes 00:28:00.716 ************************************ 00:28:00.974 06:58:14 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:28:00.975 06:58:14 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:28:00.975 06:58:14 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:28:00.975 06:58:14 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:28:00.975 06:58:14 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:28:00.975 06:58:14 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:28:00.975 06:58:14 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:28:00.975 06:58:14 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:28:00.975 06:58:14 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:28:00.975 06:58:14 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:28:00.975 06:58:14 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:28:00.975 06:58:14 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:28:00.975 06:58:14 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:28:00.975 06:58:14 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:28:00.975 06:58:14 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:28:00.975 06:58:14 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:28:00.975 06:58:14 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:28:00.975 06:58:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:00.975 06:58:14 -- common/autotest_common.sh@10 -- # set +x 00:28:00.975 06:58:14 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:28:00.975 06:58:14 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:28:00.975 06:58:14 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:28:00.975 06:58:14 -- common/autotest_common.sh@10 -- # set +x 00:28:02.878 INFO: APP EXITING 00:28:02.878 INFO: killing all VMs 00:28:02.878 INFO: killing vhost app 00:28:02.878 INFO: EXIT DONE 00:28:03.137 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:03.137 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:28:03.137 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:28:04.073 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:04.073 Cleaning 00:28:04.073 Removing: /var/run/dpdk/spdk0/config 00:28:04.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:04.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:04.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:04.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:04.073 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:04.073 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:04.073 Removing: /var/run/dpdk/spdk1/config 00:28:04.073 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:04.073 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:04.073 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:28:04.073 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:04.073 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:04.073 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:04.073 Removing: /var/run/dpdk/spdk2/config 00:28:04.073 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:04.073 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:04.073 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:04.073 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:04.073 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:04.073 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:04.073 Removing: /var/run/dpdk/spdk3/config 00:28:04.073 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:04.073 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:04.073 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:04.073 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:04.073 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:04.073 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:04.073 Removing: /var/run/dpdk/spdk4/config 00:28:04.073 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:04.073 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:04.073 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:04.073 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:04.073 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:04.073 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:04.073 Removing: /dev/shm/nvmf_trace.0 00:28:04.073 Removing: /dev/shm/spdk_tgt_trace.pid55532 00:28:04.073 Removing: /var/run/dpdk/spdk0 00:28:04.073 Removing: /var/run/dpdk/spdk1 00:28:04.073 Removing: /var/run/dpdk/spdk2 00:28:04.073 Removing: /var/run/dpdk/spdk3 00:28:04.073 Removing: /var/run/dpdk/spdk4 00:28:04.073 Removing: /var/run/dpdk/spdk_pid55369 00:28:04.073 Removing: /var/run/dpdk/spdk_pid55532 00:28:04.073 Removing: /var/run/dpdk/spdk_pid55853 00:28:04.073 Removing: /var/run/dpdk/spdk_pid56128 00:28:04.073 Removing: /var/run/dpdk/spdk_pid56311 00:28:04.073 Removing: /var/run/dpdk/spdk_pid56401 00:28:04.073 Removing: /var/run/dpdk/spdk_pid56500 00:28:04.073 Removing: /var/run/dpdk/spdk_pid56603 00:28:04.073 Removing: /var/run/dpdk/spdk_pid56646 00:28:04.073 Removing: /var/run/dpdk/spdk_pid56677 00:28:04.073 Removing: /var/run/dpdk/spdk_pid56751 00:28:04.073 Removing: /var/run/dpdk/spdk_pid56863 00:28:04.073 Removing: /var/run/dpdk/spdk_pid57507 00:28:04.073 Removing: /var/run/dpdk/spdk_pid57571 00:28:04.073 Removing: /var/run/dpdk/spdk_pid57640 00:28:04.073 Removing: /var/run/dpdk/spdk_pid57668 00:28:04.073 Removing: /var/run/dpdk/spdk_pid57771 00:28:04.073 Removing: /var/run/dpdk/spdk_pid57798 00:28:04.073 Removing: /var/run/dpdk/spdk_pid57884 00:28:04.073 Removing: /var/run/dpdk/spdk_pid57912 00:28:04.073 Removing: /var/run/dpdk/spdk_pid57969 00:28:04.073 Removing: /var/run/dpdk/spdk_pid57999 00:28:04.073 Removing: /var/run/dpdk/spdk_pid58049 00:28:04.073 Removing: /var/run/dpdk/spdk_pid58081 00:28:04.073 Removing: /var/run/dpdk/spdk_pid58253 00:28:04.073 Removing: /var/run/dpdk/spdk_pid58294 00:28:04.073 Removing: /var/run/dpdk/spdk_pid58376 00:28:04.073 Removing: /var/run/dpdk/spdk_pid58451 00:28:04.073 Removing: /var/run/dpdk/spdk_pid58475 00:28:04.073 Removing: /var/run/dpdk/spdk_pid58539 00:28:04.073 Removing: /var/run/dpdk/spdk_pid58564 00:28:04.332 Removing: /var/run/dpdk/spdk_pid58599 00:28:04.332 Removing: /var/run/dpdk/spdk_pid58618 00:28:04.332 Removing: /var/run/dpdk/spdk_pid58657 
00:28:04.332 Removing: /var/run/dpdk/spdk_pid58678 00:28:04.332 Removing: /var/run/dpdk/spdk_pid58712 00:28:04.332 Removing: /var/run/dpdk/spdk_pid58732 00:28:04.332 Removing: /var/run/dpdk/spdk_pid58766 00:28:04.332 Removing: /var/run/dpdk/spdk_pid58786 00:28:04.332 Removing: /var/run/dpdk/spdk_pid58820 00:28:04.332 Removing: /var/run/dpdk/spdk_pid58840 00:28:04.332 Removing: /var/run/dpdk/spdk_pid58874 00:28:04.332 Removing: /var/run/dpdk/spdk_pid58894 00:28:04.332 Removing: /var/run/dpdk/spdk_pid58928 00:28:04.332 Removing: /var/run/dpdk/spdk_pid58948 00:28:04.332 Removing: /var/run/dpdk/spdk_pid58981 00:28:04.332 Removing: /var/run/dpdk/spdk_pid59002 00:28:04.332 Removing: /var/run/dpdk/spdk_pid59032 00:28:04.332 Removing: /var/run/dpdk/spdk_pid59058 00:28:04.332 Removing: /var/run/dpdk/spdk_pid59087 00:28:04.332 Removing: /var/run/dpdk/spdk_pid59112 00:28:04.332 Removing: /var/run/dpdk/spdk_pid59147 00:28:04.332 Removing: /var/run/dpdk/spdk_pid59166 00:28:04.332 Removing: /var/run/dpdk/spdk_pid59202 00:28:04.332 Removing: /var/run/dpdk/spdk_pid59221 00:28:04.332 Removing: /var/run/dpdk/spdk_pid59256 00:28:04.332 Removing: /var/run/dpdk/spdk_pid59277 00:28:04.332 Removing: /var/run/dpdk/spdk_pid59318 00:28:04.332 Removing: /var/run/dpdk/spdk_pid59339 00:28:04.332 Removing: /var/run/dpdk/spdk_pid59373 00:28:04.333 Removing: /var/run/dpdk/spdk_pid59393 00:28:04.333 Removing: /var/run/dpdk/spdk_pid59427 00:28:04.333 Removing: /var/run/dpdk/spdk_pid59450 00:28:04.333 Removing: /var/run/dpdk/spdk_pid59492 00:28:04.333 Removing: /var/run/dpdk/spdk_pid59510 00:28:04.333 Removing: /var/run/dpdk/spdk_pid59553 00:28:04.333 Removing: /var/run/dpdk/spdk_pid59567 00:28:04.333 Removing: /var/run/dpdk/spdk_pid59607 00:28:04.333 Removing: /var/run/dpdk/spdk_pid59621 00:28:04.333 Removing: /var/run/dpdk/spdk_pid59662 00:28:04.333 Removing: /var/run/dpdk/spdk_pid59739 00:28:04.333 Removing: /var/run/dpdk/spdk_pid59857 00:28:04.333 Removing: /var/run/dpdk/spdk_pid60303 00:28:04.333 Removing: /var/run/dpdk/spdk_pid67294 00:28:04.333 Removing: /var/run/dpdk/spdk_pid67653 00:28:04.333 Removing: /var/run/dpdk/spdk_pid70072 00:28:04.333 Removing: /var/run/dpdk/spdk_pid70458 00:28:04.333 Removing: /var/run/dpdk/spdk_pid70700 00:28:04.333 Removing: /var/run/dpdk/spdk_pid70749 00:28:04.333 Removing: /var/run/dpdk/spdk_pid71023 00:28:04.333 Removing: /var/run/dpdk/spdk_pid71030 00:28:04.333 Removing: /var/run/dpdk/spdk_pid71083 00:28:04.333 Removing: /var/run/dpdk/spdk_pid71141 00:28:04.333 Removing: /var/run/dpdk/spdk_pid71207 00:28:04.333 Removing: /var/run/dpdk/spdk_pid71245 00:28:04.333 Removing: /var/run/dpdk/spdk_pid71247 00:28:04.333 Removing: /var/run/dpdk/spdk_pid71277 00:28:04.333 Removing: /var/run/dpdk/spdk_pid71315 00:28:04.333 Removing: /var/run/dpdk/spdk_pid71319 00:28:04.333 Removing: /var/run/dpdk/spdk_pid71382 00:28:04.333 Removing: /var/run/dpdk/spdk_pid71435 00:28:04.333 Removing: /var/run/dpdk/spdk_pid71495 00:28:04.333 Removing: /var/run/dpdk/spdk_pid71539 00:28:04.333 Removing: /var/run/dpdk/spdk_pid71541 00:28:04.333 Removing: /var/run/dpdk/spdk_pid71572 00:28:04.333 Removing: /var/run/dpdk/spdk_pid71874 00:28:04.333 Removing: /var/run/dpdk/spdk_pid72027 00:28:04.333 Removing: /var/run/dpdk/spdk_pid72295 00:28:04.333 Removing: /var/run/dpdk/spdk_pid72341 00:28:04.333 Removing: /var/run/dpdk/spdk_pid72732 00:28:04.333 Removing: /var/run/dpdk/spdk_pid73265 00:28:04.333 Removing: /var/run/dpdk/spdk_pid73691 00:28:04.333 Removing: /var/run/dpdk/spdk_pid74676 00:28:04.333 Removing: 
/var/run/dpdk/spdk_pid75670 00:28:04.333 Removing: /var/run/dpdk/spdk_pid75786 00:28:04.333 Removing: /var/run/dpdk/spdk_pid75856 00:28:04.333 Removing: /var/run/dpdk/spdk_pid77360 00:28:04.333 Removing: /var/run/dpdk/spdk_pid77611 00:28:04.333 Removing: /var/run/dpdk/spdk_pid78067 00:28:04.333 Removing: /var/run/dpdk/spdk_pid78178 00:28:04.333 Removing: /var/run/dpdk/spdk_pid78324 00:28:04.333 Removing: /var/run/dpdk/spdk_pid78370 00:28:04.592 Removing: /var/run/dpdk/spdk_pid78415 00:28:04.592 Removing: /var/run/dpdk/spdk_pid78461 00:28:04.592 Removing: /var/run/dpdk/spdk_pid78624 00:28:04.592 Removing: /var/run/dpdk/spdk_pid78777 00:28:04.592 Removing: /var/run/dpdk/spdk_pid79041 00:28:04.592 Removing: /var/run/dpdk/spdk_pid79164 00:28:04.592 Removing: /var/run/dpdk/spdk_pid79589 00:28:04.592 Removing: /var/run/dpdk/spdk_pid79978 00:28:04.592 Removing: /var/run/dpdk/spdk_pid79984 00:28:04.592 Removing: /var/run/dpdk/spdk_pid82237 00:28:04.592 Removing: /var/run/dpdk/spdk_pid82558 00:28:04.592 Removing: /var/run/dpdk/spdk_pid83069 00:28:04.592 Removing: /var/run/dpdk/spdk_pid83078 00:28:04.592 Removing: /var/run/dpdk/spdk_pid83430 00:28:04.592 Removing: /var/run/dpdk/spdk_pid83444 00:28:04.592 Removing: /var/run/dpdk/spdk_pid83458 00:28:04.592 Removing: /var/run/dpdk/spdk_pid83491 00:28:04.592 Removing: /var/run/dpdk/spdk_pid83496 00:28:04.592 Removing: /var/run/dpdk/spdk_pid83645 00:28:04.592 Removing: /var/run/dpdk/spdk_pid83647 00:28:04.592 Removing: /var/run/dpdk/spdk_pid83750 00:28:04.592 Removing: /var/run/dpdk/spdk_pid83756 00:28:04.592 Removing: /var/run/dpdk/spdk_pid83861 00:28:04.592 Removing: /var/run/dpdk/spdk_pid83863 00:28:04.592 Removing: /var/run/dpdk/spdk_pid84355 00:28:04.592 Removing: /var/run/dpdk/spdk_pid84404 00:28:04.592 Removing: /var/run/dpdk/spdk_pid84556 00:28:04.592 Removing: /var/run/dpdk/spdk_pid84677 00:28:04.592 Removing: /var/run/dpdk/spdk_pid85082 00:28:04.592 Removing: /var/run/dpdk/spdk_pid85333 00:28:04.592 Removing: /var/run/dpdk/spdk_pid85830 00:28:04.592 Removing: /var/run/dpdk/spdk_pid86392 00:28:04.592 Removing: /var/run/dpdk/spdk_pid86870 00:28:04.592 Removing: /var/run/dpdk/spdk_pid86960 00:28:04.592 Removing: /var/run/dpdk/spdk_pid87056 00:28:04.592 Removing: /var/run/dpdk/spdk_pid87142 00:28:04.592 Removing: /var/run/dpdk/spdk_pid87306 00:28:04.592 Removing: /var/run/dpdk/spdk_pid87398 00:28:04.592 Removing: /var/run/dpdk/spdk_pid87489 00:28:04.592 Removing: /var/run/dpdk/spdk_pid87580 00:28:04.592 Removing: /var/run/dpdk/spdk_pid87944 00:28:04.592 Removing: /var/run/dpdk/spdk_pid88650 00:28:04.592 Removing: /var/run/dpdk/spdk_pid90018 00:28:04.592 Removing: /var/run/dpdk/spdk_pid90220 00:28:04.592 Removing: /var/run/dpdk/spdk_pid90504 00:28:04.592 Removing: /var/run/dpdk/spdk_pid90816 00:28:04.592 Removing: /var/run/dpdk/spdk_pid91378 00:28:04.592 Removing: /var/run/dpdk/spdk_pid91384 00:28:04.592 Removing: /var/run/dpdk/spdk_pid91745 00:28:04.592 Removing: /var/run/dpdk/spdk_pid91915 00:28:04.592 Removing: /var/run/dpdk/spdk_pid92075 00:28:04.592 Removing: /var/run/dpdk/spdk_pid92171 00:28:04.592 Removing: /var/run/dpdk/spdk_pid92326 00:28:04.592 Removing: /var/run/dpdk/spdk_pid92435 00:28:04.592 Removing: /var/run/dpdk/spdk_pid93121 00:28:04.592 Removing: /var/run/dpdk/spdk_pid93155 00:28:04.592 Removing: /var/run/dpdk/spdk_pid93186 00:28:04.592 Removing: /var/run/dpdk/spdk_pid93436 00:28:04.592 Removing: /var/run/dpdk/spdk_pid93471 00:28:04.592 Removing: /var/run/dpdk/spdk_pid93507 00:28:04.592 Clean 00:28:04.851 killing process with pid 
49776 00:28:04.851 killing process with pid 49779 00:28:04.851 06:58:18 -- common/autotest_common.sh@1446 -- # return 0 00:28:04.851 06:58:18 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:28:04.851 06:58:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:04.851 06:58:18 -- common/autotest_common.sh@10 -- # set +x 00:28:04.851 06:58:18 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:28:04.851 06:58:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:04.851 06:58:18 -- common/autotest_common.sh@10 -- # set +x 00:28:04.851 06:58:18 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:04.851 06:58:18 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:04.851 06:58:18 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:04.851 06:58:18 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:28:04.851 06:58:18 -- spdk/autotest.sh@383 -- # hostname 00:28:04.851 06:58:18 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:05.109 geninfo: WARNING: invalid characters removed from testname! 00:28:27.055 06:58:39 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:29.612 06:58:43 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:31.515 06:58:45 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:34.048 06:58:48 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:36.582 06:58:50 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:39.113 06:58:52 -- spdk/autotest.sh@392 -- # lcov 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:41.020 06:58:54 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:41.278 06:58:55 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:28:41.278 06:58:55 -- common/autotest_common.sh@1690 -- $ lcov --version 00:28:41.278 06:58:55 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:28:41.278 06:58:55 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:28:41.278 06:58:55 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:28:41.278 06:58:55 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:28:41.278 06:58:55 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:28:41.278 06:58:55 -- scripts/common.sh@335 -- $ IFS=.-: 00:28:41.278 06:58:55 -- scripts/common.sh@335 -- $ read -ra ver1 00:28:41.278 06:58:55 -- scripts/common.sh@336 -- $ IFS=.-: 00:28:41.278 06:58:55 -- scripts/common.sh@336 -- $ read -ra ver2 00:28:41.278 06:58:55 -- scripts/common.sh@337 -- $ local 'op=<' 00:28:41.278 06:58:55 -- scripts/common.sh@339 -- $ ver1_l=2 00:28:41.278 06:58:55 -- scripts/common.sh@340 -- $ ver2_l=1 00:28:41.278 06:58:55 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:28:41.278 06:58:55 -- scripts/common.sh@343 -- $ case "$op" in 00:28:41.278 06:58:55 -- scripts/common.sh@344 -- $ : 1 00:28:41.278 06:58:55 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:28:41.278 06:58:55 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:41.278 06:58:55 -- scripts/common.sh@364 -- $ decimal 1 00:28:41.278 06:58:55 -- scripts/common.sh@352 -- $ local d=1 00:28:41.278 06:58:55 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:28:41.278 06:58:55 -- scripts/common.sh@354 -- $ echo 1 00:28:41.278 06:58:55 -- scripts/common.sh@364 -- $ ver1[v]=1 00:28:41.278 06:58:55 -- scripts/common.sh@365 -- $ decimal 2 00:28:41.278 06:58:55 -- scripts/common.sh@352 -- $ local d=2 00:28:41.278 06:58:55 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:28:41.278 06:58:55 -- scripts/common.sh@354 -- $ echo 2 00:28:41.278 06:58:55 -- scripts/common.sh@365 -- $ ver2[v]=2 00:28:41.278 06:58:55 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:28:41.278 06:58:55 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:28:41.278 06:58:55 -- scripts/common.sh@367 -- $ return 0 00:28:41.278 06:58:55 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:41.278 06:58:55 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:28:41.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.278 --rc genhtml_branch_coverage=1 00:28:41.279 --rc genhtml_function_coverage=1 00:28:41.279 --rc genhtml_legend=1 00:28:41.279 --rc geninfo_all_blocks=1 00:28:41.279 --rc geninfo_unexecuted_blocks=1 00:28:41.279 00:28:41.279 ' 00:28:41.279 06:58:55 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:28:41.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.279 --rc genhtml_branch_coverage=1 00:28:41.279 --rc genhtml_function_coverage=1 00:28:41.279 --rc genhtml_legend=1 00:28:41.279 --rc geninfo_all_blocks=1 00:28:41.279 --rc geninfo_unexecuted_blocks=1 00:28:41.279 00:28:41.279 ' 00:28:41.279 06:58:55 -- 
common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:28:41.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.279 --rc genhtml_branch_coverage=1 00:28:41.279 --rc genhtml_function_coverage=1 00:28:41.279 --rc genhtml_legend=1 00:28:41.279 --rc geninfo_all_blocks=1 00:28:41.279 --rc geninfo_unexecuted_blocks=1 00:28:41.279 00:28:41.279 ' 00:28:41.279 06:58:55 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:28:41.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.279 --rc genhtml_branch_coverage=1 00:28:41.279 --rc genhtml_function_coverage=1 00:28:41.279 --rc genhtml_legend=1 00:28:41.279 --rc geninfo_all_blocks=1 00:28:41.279 --rc geninfo_unexecuted_blocks=1 00:28:41.279 00:28:41.279 ' 00:28:41.279 06:58:55 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:41.279 06:58:55 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:41.279 06:58:55 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:41.279 06:58:55 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:41.279 06:58:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.279 06:58:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.279 06:58:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.279 06:58:55 -- paths/export.sh@5 -- $ export PATH 00:28:41.279 06:58:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.279 06:58:55 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:28:41.279 06:58:55 -- common/autobuild_common.sh@440 -- $ date +%s 00:28:41.279 06:58:55 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734159535.XXXXXX 00:28:41.279 06:58:55 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734159535.J2jiPL 00:28:41.279 06:58:55 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:28:41.279 06:58:55 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:28:41.279 06:58:55 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:28:41.279 06:58:55 -- 
common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:28:41.279 06:58:55 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:28:41.279 06:58:55 -- common/autobuild_common.sh@456 -- $ get_config_params 00:28:41.279 06:58:55 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:28:41.279 06:58:55 -- common/autotest_common.sh@10 -- $ set +x 00:28:41.279 06:58:55 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:28:41.279 06:58:55 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:28:41.279 06:58:55 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:28:41.279 06:58:55 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:41.279 06:58:55 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:28:41.279 06:58:55 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:41.279 06:58:55 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:41.279 06:58:55 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:41.279 06:58:55 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:41.279 06:58:55 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:41.279 06:58:55 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:41.279 + [[ -n 5236 ]] 00:28:41.279 + sudo kill 5236 00:28:41.545 [Pipeline] } 00:28:41.559 [Pipeline] // timeout 00:28:41.564 [Pipeline] } 00:28:41.577 [Pipeline] // stage 00:28:41.581 [Pipeline] } 00:28:41.594 [Pipeline] // catchError 00:28:41.603 [Pipeline] stage 00:28:41.605 [Pipeline] { (Stop VM) 00:28:41.616 [Pipeline] sh 00:28:41.895 + vagrant halt 00:28:45.187 ==> default: Halting domain... 00:28:49.387 [Pipeline] sh 00:28:49.666 + vagrant destroy -f 00:28:52.950 ==> default: Removing domain... 00:28:52.962 [Pipeline] sh 00:28:53.240 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:28:53.249 [Pipeline] } 00:28:53.263 [Pipeline] // stage 00:28:53.269 [Pipeline] } 00:28:53.282 [Pipeline] // dir 00:28:53.288 [Pipeline] } 00:28:53.299 [Pipeline] // wrap 00:28:53.303 [Pipeline] } 00:28:53.311 [Pipeline] // catchError 00:28:53.319 [Pipeline] stage 00:28:53.320 [Pipeline] { (Epilogue) 00:28:53.330 [Pipeline] sh 00:28:53.606 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:58.887 [Pipeline] catchError 00:28:58.889 [Pipeline] { 00:28:58.902 [Pipeline] sh 00:28:59.182 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:59.440 Artifacts sizes are good 00:28:59.449 [Pipeline] } 00:28:59.462 [Pipeline] // catchError 00:28:59.473 [Pipeline] archiveArtifacts 00:28:59.480 Archiving artifacts 00:28:59.600 [Pipeline] cleanWs 00:28:59.611 [WS-CLEANUP] Deleting project workspace... 00:28:59.611 [WS-CLEANUP] Deferred wipeout is used... 00:28:59.617 [WS-CLEANUP] done 00:28:59.619 [Pipeline] } 00:28:59.635 [Pipeline] // stage 00:28:59.640 [Pipeline] } 00:28:59.654 [Pipeline] // node 00:28:59.659 [Pipeline] End of Pipeline 00:28:59.696 Finished: SUCCESS
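Note on the coverage steps recorded above: the autotest epilogue merges the pre-test baseline tracefile (cov_base.info) with the tracefile captured after the run (cov_test.info), then strips everything that is not SPDK source (the dpdk/ submodule, system headers under /usr/*, the vmd example, and the spdk_lspci/spdk_top apps) so that the final cov_total.info reflects only SPDK code. Below is a minimal standalone sketch of that flow; the tracefile names and filter patterns are taken from the log itself, the branch/function-coverage --rc options are omitted for brevity, and the closing genhtml step and its output directory name are assumptions for illustration (no HTML render appears in this particular run).

  # Merge the baseline and post-test tracefiles into one combined report
  lcov -q -a cov_base.info -a cov_test.info -o cov_total.info

  # Remove coverage data for sources that are not part of SPDK itself
  lcov -q -r cov_total.info '*/dpdk/*'           -o cov_total.info
  lcov -q -r cov_total.info '/usr/*'             -o cov_total.info
  lcov -q -r cov_total.info '*/examples/vmd/*'   -o cov_total.info
  lcov -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
  lcov -q -r cov_total.info '*/app/spdk_top/*'   -o cov_total.info

  # Assumed follow-on step (not in this log): render an HTML report from the filtered data
  genhtml cov_total.info -o coverage_html

Filtering after the merge, against the already-combined cov_total.info, keeps the reported numbers attributable to SPDK's own code rather than to vendored DPDK or system sources.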